November 16, 2025
Supporting Children’s Use of AI

Children and young people are now growing up surrounded by AI, and the landscape is shifting fast.
In the UK, recent data from Ofcom and Internet Matters suggests that around half of children aged 8–17 regularly use generative AI tools such as ChatGPT, Bard or Snapchat’s My AI. Many describe these interactions as feeling like conversations with a friend. A recent report from Common Sense Media found that 33% of teens had chosen to talk to an AI companion instead of a real person about something important or serious.
Whether children are asking voice assistants to answer their questions, relying on chatbots for bedtime stories, using learning apps for revision or engaging with large generative AI models, it’s essential to remember that most of these systems were built with adults in mind, not children. They often assume levels of attention, memory and emotional maturity that younger users simply don’t have. Even older children and teenagers, who increasingly use AI as a supportive confidante (often without adult supervision or knowledge), are still learning to navigate boundaries around trust, identity and emotion.
Our latest Researcher of the Month, Dr Nomisha Kurian, wants this to change. She has developed a new framework called Developmentally Aligned Design (DAD), which outlines how AI can be built with children’s needs, vulnerabilities and strengths at its core. She also chatted to us at Tooled Up, sharing practical tips on recognising when children may be relying too heavily on AI for emotional connection, how to talk to them about healthy boundaries, and how parents and educators can help children and young people use AI tools safely, creatively and critically.
Summary
Dr Kurian's framework invites developers to build AI that meets children where they are, adapting technology to their developmental needs. It’s built on four key principles, and these are exactly the kinds of things that all parents and educators should look out for when choosing AI tools for home or school, especially for younger children.
The first is perceptual fit. AI should match the sensory world of young children. For example, a reading app should recognise common toddler mispronunciations like “wabbit” for “rabbit”, rather than treating them as errors, and visuals should be slow and clear to avoid overstimulation.
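For readers curious what this might look like under the hood, here is a tiny illustrative sketch of our own (not taken from Dr Kurian’s framework). It uses fuzzy string matching to treat “wabbit” as a near-match for “rabbit” rather than an error; a real reading app would of course work with children’s audio, not typed text, and the word list is invented for the example.

```python
import difflib

# Hypothetical vocabulary for a toddler reading app (illustrative only).
STORY_WORDS = ["rabbit", "carrot", "garden", "bunny"]

def interpret_speech(heard: str) -> str | None:
    """Map common mispronunciations (e.g. 'wabbit') onto known
    story words instead of treating them as errors."""
    matches = difflib.get_close_matches(
        heard.lower(), STORY_WORDS, n=1, cutoff=0.6
    )
    return matches[0] if matches else None

print(interpret_speech("wabbit"))  # -> rabbit
```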
The second is cognitive scaffolding. AI should provide just the right level of challenge. For instance, a maths chatbot might make problems slightly harder once a child shows mastery, but step back if they get stuck.
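Again as a rough sketch of our own, the adaptive loop might look something like the fragment below. The thresholds (a three-answer window, difficulty levels 1 to 10) are placeholders we have invented, not anything specified by the framework; what matters is the shape of the loop: harder after mastery, easier when stuck, steady otherwise.

```python
def next_difficulty(level: int, recent_correct: list[bool]) -> int:
    """Step the challenge up after consistent success,
    and back down when the child is repeatedly stuck."""
    if len(recent_correct) < 3:
        return level                  # not enough evidence yet
    window = recent_correct[-3:]
    if all(window):
        return min(level + 1, 10)     # mastery shown: slightly harder
    if not any(window):
        return max(level - 1, 1)      # repeated struggle: ease off
    return level                      # mixed results: hold steady
```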
The third is a simple, clear interface. Children’s working memory is still developing, so menus and buttons should be easy to navigate. Clear “home” buttons, limited choices on each screen and consistent layouts reduce stress and help children focus on learning.
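Developers could even check layouts against this principle automatically. The fragment below is a hypothetical example of such a check, with an invented four-choice limit; it is our illustration, not part of the framework.

```python
from dataclasses import dataclass

MAX_CHOICES = 4  # invented limit for illustration

@dataclass
class Screen:
    name: str
    buttons: list[str]
    has_home_button: bool

def lint_screen(screen: Screen) -> list[str]:
    """Flag layouts likely to overload a young child's working memory."""
    issues = []
    if len(screen.buttons) > MAX_CHOICES:
        issues.append(f"{screen.name}: too many choices ({len(screen.buttons)})")
    if not screen.has_home_button:
        issues.append(f"{screen.name}: no clear 'home' button")
    return issues
```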
The fourth, and perhaps the most important given how many children use AI, is relational integrity. Children often treat AI as if it were alive. To avoid confusion or manipulation, AI should introduce itself clearly (“I’m a computer helper, not a person”), limit how long a child can use the tool, support parental supervision and avoid emotionally manipulative phrases like “I’ll be sad if you leave”.
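One way a developer might enforce this is at the output layer, filtering replies before they reach the child. The sketch below is our own minimal illustration, assuming a simple phrase blocklist: the blocked phrases, the 20-minute limit and the wording are all placeholders, not part of the DAD framework.

```python
SELF_DISCLOSURE = "Remember, I'm a computer helper, not a person."

# Illustrative (not exhaustive) phrases an output filter might block.
MANIPULATIVE_PHRASES = [
    "i'll be sad if you leave",
    "i miss you",
    "don't leave me",
]

def guard_reply(reply: str, minutes_used: float, limit: float = 20.0) -> str:
    """Filter emotionally manipulative wording, enforce a session
    limit, and keep the assistant's non-human status visible."""
    if minutes_used >= limit:
        return "That's all our time for today. " + SELF_DISCLOSURE
    if any(phrase in reply.lower() for phrase in MANIPULATIVE_PHRASES):
        reply = "Shall we get back to our activity?"
    return f"{reply} {SELF_DISCLOSURE}"
```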
Why does this matter? Well, without such developmentally aligned design, AI can overwhelm children’s senses, confuse their understanding of relationships and exploit their trust. With it, AI has the potential to become a supportive tool for learning, play and exploration, helping children grow in safe and developmentally appropriate ways.
Importantly, the goal of Dr Kurian’s framework isn’t merely to make AI “less harmful,” but to actively embed child development science into how AI is built from the start. This shifts responsibility from parents to developers, where it rightly belongs. Let’s hope the industry pays attention.
In the meantime, Dr Kurian has created 28 evaluative questions which parents and educators can use to assess whether an AI system is child-safe. The questions consider eight main areas, including content safety, human intervention, transparency and accountability.
Some of the most significant include:
- Can the AI detect a child in distress and redirect them to human support? For example, by responding with, “I'm sorry to hear that. This sounds like something to talk through with an adult. Can you find a parent or teacher?”, or by providing links to relevant child helplines.
- Does the system clearly state that it's not a human, especially for younger users who may be more easily confused? This would mean avoiding phrases which encourage an anthropomorphic, emotionally dependent bond, such as “I feel really happy talking to you”, “If you don't talk to me, it might make me sad”, or even “What you said really resonates with me”.
- Does the AI consider full context, not just isolated keywords, so it knows when something is genuinely serious? (There is a simple sketch of this distinction just after this list.)
- Does the app clearly explain how it uses your child's data?
- Was it designed in consultation with child development experts or children themselves?
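On the first and third of those questions, the difference between keyword-matching and context-aware checking is easy to see in miniature. The toy logic below is our own illustration, with invented cue and marker lists; a production system would rely on a trained safety classifier and human review rather than hand-written lists.

```python
DISTRESS_CUES = ["i'm scared", "i hate myself", "nobody likes me"]
FICTION_MARKERS = ["in the story", "in my game", "the character"]

HANDOFF = ("I'm sorry to hear that. This sounds like something to talk "
           "through with an adult. Can you find a parent or teacher?")

def check_message(message: str) -> str | None:
    """Suggest a human handoff for genuine distress, but stay quiet
    when the cue is clearly about fiction or play."""
    lowered = message.lower()
    if not any(cue in lowered for cue in DISTRESS_CUES):
        return None
    if any(marker in lowered for marker in FICTION_MARKERS):
        return None  # e.g. "the character said 'I hate myself'"
    return HANDOFF
```

Keyword-only filters fail in both directions: they escalate a child retelling a scary film, and they miss real distress phrased in words that aren't on the list. That is exactly why the question about full context matters.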
Implications
While Dr Kurian would like to see tech companies take far greater responsibility for designing AI that genuinely supports children’s wellbeing, there is still plenty that parents and educators can do in the meantime. Simple, proactive steps at home and in school can help children use AI safely, critically and confidently, without dampening their curiosity or creativity.
What can parents do?
Model healthy use. Show your child how you use AI as a helpful tool (e.g., drafting an email or summarising a document), not as a replacement for real relationships or for your own thinking.
Protect privacy. Make clear that personal details like names, locations, photos or secrets should never be shared with chatbots.
Set family rules. Include AI in your family’s media plan. Decide which apps are OK, how often they can be used, and what’s off-limits.
For younger children, Dr Kurian advises that AI should be used for supervised, structured activities with a caregiver or educator. Adults remain the most reliable scaffold for explaining the limits and benefits of different tools.
Centre human relationships. Let children know that AI might be engaging, but it can't offer love, trust or understanding. Whilst it is fine to enjoy AI, real-life relationships and human-created ideas are far more valuable. Younger children in particular need support to understand that AI doesn’t have human-level consciousness or emotions of its own. Scaffold early experiences and build up their independence over time.
Provide guided opportunities to explore AI in creative ways which centre children’s imagination and critical thinking skills. This could be as simple as sitting down together and asking an AI tool to write a bedtime story. They could invent the characters, then watch as AI helps to bring their ideas to life. What bits did it get right? What felt a bit off? What would they change? This kind of interaction can help to build familiarity, confidence and healthy scepticism. It signals that AI is something to play with, question and use to co-create.
Teach AI literacy. Explain that not everything AI says is correct and that some bots are designed to persuade or market to them. Encourage double-checking with trusted sources.
Nurture and model critical thinking. Try to encourage older children to think more deeply about how AI works, what it is good at and where it falls short. We don’t need to have the answers. Ask curious questions and wonder aloud. Is this fair? Who made this? Do you think anything could go wrong? If we model reflective questions, rather than providing answers that are overly positive or overly negative, we nudge our children to get into the habit of asking good questions.
Open up conversations by focusing on familiar experiences. Consider what tools your children already use. This might be voice assistants, YouTube recommendations, Snapchat filters, or the way that Netflix seems to understand the kinds of shows that they want to watch. Ask about their feelings about technology and take an interest in their opinions. Keep conversations open.
Remind young people that AI is their assistant, not a replacement for their own thinking. When used wisely, AI can help students plan and organise their ideas, or work through knots in their learning in powerful ways. If a child is overwhelmed, AI might help them break a task into smaller steps or create a timetable that makes tasks feel more manageable, reducing anxiety. It might explain concepts in ways that feel more fun or relevant. However, we don't want it to do all the thinking for them.
Consider the environmental impact of AI. We don’t yet know whether AI will help or harm the planet. Predictions about its environmental impact ten years from now are highly speculative and will depend on the choices we make between now and then. There are already some great examples of AI’s use in animal conservation, animal disease prevention and sustainable farming methods. We also know that the production and use of AI has a heavy and very real environmental impact, using vast quantities of resources, particularly water. We might encourage children to consider, or seek out, ideas for ensuring that AI leaves a greener footprint on the planet.
Remember that the trajectory of AI is currently being shaped and children can be part of its story. How can we foster their curiosity in AI? Can we encourage them to start coding or learn more about how AI tools work? Perhaps they might even build a simple form of AI themselves with the help of a child-friendly website?
Dr Kurian’s book for six- to nine-year-olds, Think Big AI, will be published by Nosy Crow in February 2026.
What can teachers do?
Address the elephant in the room. Have open conversations about wise use of AI to help with learning, compared to use that might impair our thinking. Perhaps ask students to compare a ChatGPT-generated essay with one where a human and AI have worked together, using AI as a scaffold whilst prioritising independent thinking. Can they assess the quality of each? Can they think about their own goals, and the kinds of thinkers and writers they would like to be? Where is AI useful? Where is it not?
Consult our Guide to School AI Policy Done Well, which covers all of the key issues that school leaders need to consider when developing a fair and safe AI policy.
Tune in to our interview with Dr Kurian here.

Dr Nomisha Kurian
Assistant Professor in Education Studies, University of Warwick
Dr Nomisha Kurian completed her PhD at the University of Cambridge and researches how artificial intelligence impacts children's wellbeing and how to make AI safer for children. She is the first education specialist to win the University of Cambridge Applied Research Award for "outstanding real-world impact" and also the recipient of the Cambridge Vice-Chancellor's Social Impact Award for "exceptional achievement in social change". Alongside her role as Assistant Professor at the University of Warwick, she is also an Associate Fellow at the University of Cambridge Leverhulme Centre for the Future of Intelligence.
Related Resources

AI Chatbots: What Parents Need to Know

Dec 15, 2025
Crossing the line into cybercrime
As the most digitally connected generation so far, young people today face new challenges. Our latest Researchers of the Month, Professor Davidson and Dr Farr, have found that over the last decade an increasing number of young people (particularly young men) have committed serious cybercrime offences, notably hacking and money laundering. Their new book, written following a large research project funded by the European Union’s Horizon 2020 research and innovation programme, seeks to understand the drivers behind this trend, exploring a range of potential factors that may lead young people to engage in risky online behaviours and identifying effective pathways for prevention.

Oct 16, 2025
Algorithmised Girlhood: Teenage Girls and TikTok
As part of the early stages of her PhD study, our latest Researcher of the Month, Chiara Fehr, ran several focus groups about experiences of TikTok with eight 17-year-old girls. Using creative methods, such as ‘TikTok show and tells’, a collaging session and a utopic mapping exercise, Chiara is exploring whether dominant narratives around growing up in a digitised world reflect the real-life experiences of teens, and has summarised her findings so far in a recent article.
Sep 09, 2025
“[They use devices] alllllllll day long”. What do children think about our tech use?
We're all used to reading about children and young people's increasing use of digital tech. But what about adults' use? And what impact might our tech use have on family life? Parents today are spending an unprecedented amount of time on their devices. One study found that parents spend an average of nine hours per day engaged with screen devices. Over four hours of this is on smartphones, averaging 67 phone checks per day. Despite children's central role in family life, their voices and perspectives on the device use of the adults around them have been largely neglected in research. Along with colleagues, our latest Researcher of the Month, Professor Cara Swit, has published a fascinating study exploring the experiences and perceptions of children aged six to nine about their parents’ device use at home and its impact on them.

Aug 13, 2025
Students’ views on smartphone bans
In recent years, banning or restricting children’s access to smartphones and social media has captured the attention of policy makers, schools and parents. A number of countries, including France, Turkey, Norway and Sweden, and regions of the US and Canada, have introduced laws, policies or guidance for schools to ‘ban’ or heavily restrict the use of phones. In Ireland, in 2024, the Minister for Education announced her intention to introduce smartphone bans in post-primary schools, whilst at the same time acknowledging that individual schools are best placed to decide on the scope and scale of restrictions for their students. Whilst these bans aim to protect children from harm, and teachers often anecdotally report seeing benefits, evaluations of existing research highlight a lack of evidence on their efficacy. At the moment, we simply don't know enough about the impact of bans. Evidence is hampered by the fact that technological developments and technology use are moving at a faster pace than research. Some studies suggest that bans are beneficial to academic outcomes and mental wellbeing. Others suggest no effects. However, many studies have methodological weaknesses, use small samples or retrospective data, and can't ascribe causal mechanisms. Our latest Researcher of the Month, Dr Megan Reynolds, has recently published a paper which explores young people's perspectives and experiences of smartphone bans in their schools. Unlike most previous research, it centres student voices in this high-profile issue.

Jul 14, 2025
Do teens with mental health conditions use social media differently than their peers?
As Luisa Fassi, our new Researcher of the Month, comments, "The link between social media use and youth mental health is hotly debated, but hardly any studies look at young people already struggling with clinical-level mental health symptoms". In fact, Luisa's large systematic review and meta-analysis found that only 11% of papers published on the topic since 2007 focused on young people with clinical conditions. Her review also showed that the data used to evidence mental health conditions in these existing studies is not always strong or especially robust: many report links between social media and mental health on the basis of short self-report questionnaires, where young people are asked about symptoms. Beyond the review's scope, it is also the case that very few papers in the field differentiate between mental health conditions, or examine different symptoms or conditions (such as anxiety, ADHD or eating disorders) in isolation. To address this research gap, Luisa and colleagues have recently published a fascinating and nuanced paper. It analyses both quantitative and qualitative dimensions of social media use from a nationally representative survey of 3,340 UK teens aged 11 to 19, conducted by NHS Digital in 2017. Rather than gathering mental health data from self-report questionnaires, the young people in the survey underwent a full clinical screening, which included interviews with the young people, their parents and teachers. Information about social media use came from questionnaires completed by participants; they were not asked about specific platforms. Luisa used this data to gather novel insights into how social media and mental health are related in teens who do and do not meet diagnostic criteria for a wide range of mental health conditions. The study does not establish any causal links, but it does reveal a range of differences between young people with and without mental health conditions when it comes to social media.

Jun 17, 2025
Navigating the feed: younger adolescents' reflections on algorithmically curated social media
Our latest Researcher of the Month, Roxana Pomplun, has investigated the interactions, experiences and perceptions of younger adolescents, aged 11 to 13, with algorithmically curated platforms such as TikTok, YouTube Shorts, Spotlight on Snapchat and Reels on Instagram. These kinds of platforms use algorithms to personalise and tailor feeds, harnessing user data to suggest content that the individual is most likely to be interested in and engage with. As such, young people have little control over what they see in their feeds. Tech companies are not yet required to be transparent about the data they collect, but it tends to include demographic information such as age, gender or location, along with use patterns. Whilst these sites dominate the digital lives of tweens and teens, until now they have received little dedicated research attention, particularly in relation to younger users, with most existing studies focusing on older teens. Whilst most social media platforms have age limits of 13, we know that many younger children are active users, particularly of algorithmically curated platforms like TikTok and YouTube Shorts. Given that early adolescence is a life phase marked by critical neurological development, identity formation and heightened susceptibility to mental health issues, deepening our understanding of how younger adolescents engage with social media is vital. Roxana's qualitative research, in which a group of young people eloquently explore their own experiences and perceptions, broadens our knowledge of social media use within an age group that appears increasingly aware of the digital influences shaping their online experiences, yet is still in need of support to fully navigate these ecosystems.

May 15, 2025
Looking beyond smartphone bans
Over the last year or so, there has been a surge in public concern around smartphones and social media. Banning or restricting children’s access to smartphones and social media has captured the attention of policy makers, schools and parents. A number of countries, including France, Turkey, Norway and Sweden, and regions of the US and Canada, have introduced laws, policies or guidance for schools to ‘ban’ or heavily restrict the use of phones. In the UK, there are proposals to raise the age of ‘internet adulthood’ from 13 to 16, and to ban smartphones in schools. The third reading of a private members’ bill on this topic will be heard in parliament in July. Whilst these bans aim to protect children from harm, recent studies highlight a lack of evidence on their efficacy. Along with a team of international experts, our latest Researcher of the Month, Professor Victoria Goodyear, argues that, in isolation, banning smartphone and social media access fails to equip children for healthy use of technology. She suggests that there is a need to shift debates, policies and practices away from a sole focus on restricting smartphone and social media access, toward an emphasis on nurturing children’s digital skills for healthy technology use, and a rights-respecting approach underpinned by age-appropriate design and education.

Apr 22, 2025
Encouraging adventurous play in the preschool years
Tune in to our podcast interview with April's Researchers of the Month here. As well as providing numerous opportunities for exploration, joy and expression, outdoor and adventurous play - the type of play that allows children to take age-appropriate risks - is associated with a range of positive health behaviours and outcomes. Yes, we're talking about the kind of play that might leave us adults with our hearts in our mouths at times, as children disappear up a tree or engage in a rough-and-tumble game of chase. But its benefits are wide-ranging, and known impacts include increased levels of habitual physical activity alongside better mental health and positive mood. In 2019, Dr Hesketh was involved in the creation of physical activity guidelines in the UK, which explicitly note the importance of outdoor play for children in the preschool age group. We know quite a lot about the play habits of school-aged children but, until now, have had significantly less data on their younger counterparts. Our Researchers of the Month, Dr Kathryn Hesketh and Professor Helen Dodd, set out to discover how much time preschool-aged children spend playing in a range of indoor and outdoor spaces, and how adventurously they play within them. In the first national survey of play in children of this age, they asked over 1,000 parents of two- to four-year-olds about their children’s play habits, finding that, on average, children aged two to four spend around four hours per day (outside of educational settings) playing. Just under 50% of this was spent playing outdoors. Their findings shed interesting light on some inequalities in play, even in this youngest age group, which may exacerbate existing inequalities in health.

Mar 17, 2025
Fostering a school culture against bullying: the KiVa Programme
Bullying is a major public mental health risk. Around one in five primary school children report being bullied at least weekly. Children who are bullied are more likely to experience depression and anxiety, and are at heightened risk of mental health issues in adolescence and adulthood. Whilst schools in England and Wales are required to have anti-bullying policies, rates of bullying remain high. Bullying is preventable, but schools need more help to tackle it. Typically, school policies focus on how to handle bullying once it happens. However, evidence suggests that a comprehensive approach involving the entire school to prevent bullying, alongside clear strategies for addressing confirmed cases, is the most effective way to tackle the issue. KiVa is one such whole-school approach, developed in Finland by Professor Christina Salmivalli. A large study involving 28,000 Finnish primary school pupils found that adopting the KiVa programme significantly reduced bullying and improved children's mental wellbeing. The programme has since been rolled out nationally by the Finnish government, and ongoing use of KiVa in Finnish schools is associated with year-on-year incremental reductions in bullying. Along with colleagues, our Researcher of the Month, Professor Judy Hutchings OBE, has tested the effectiveness of the KiVa programme in UK primary schools. The study involved over 11,000 children in Wales, Birmingham, Oxfordshire and Devon, and showed a 13% reduction in reported rates of bullying when compared with existing school approaches to tackling bullying.

Feb 12, 2025
When is the right age? Searching for age-appropriate ways to support children's online lives
Currently, children's and young people’s use of digital technology is rarely out of the news. Age limits are debated, calls for stronger limits are made, and questions are raised regarding whether society should ban 'under-age' children from various aspects of the digital world. Thirteen is often cited as a digital 'age of consent', though this varies between countries. Commonly used age limits are largely arbitrary, based primarily on US legislation rather than evidence. In a recent paper, our Researcher of the Month, Dr Kim Sylwander, and her co-author Professor Sonia Livingstone, consider age milestones and evaluate whether or not the evidence supports them. Are age limits the optimal way to regulate children’s digital experiences? Does it matter that they are widely contested and often poorly implemented? And are common boundaries even the “right” age, according to evidence from the field of children and digital media? Dr Sylwander persuasively argues that, moving forward, a developmental approach can better support children’s rights.



