
Reflect
AI companions are reshaping childhood, and our old safeguarding playbook isn't enough. In fact, one in three teenagers now discusses serious and important issues with AI companions instead of other humans. Let that sink in for a moment. Not homework help. Not, "What's the weather?" But serious issues.
These are the kinds of conversations we hope our children would have with us, with trusted adults, with friends who know them. Instead, these conversations are happening with algorithms designed to maximise engagement, not to cultivate character.
This week's Wednesday Wisdom is written by Laura Knight, digital education and AI expert, and founder of Sapio Ltd, a consultancy empowering education leaders to harness technology for transformational outcomes. In this article, Laura explores how children and young people are engaging with AI, and what we can do to help them use it responsibly and critically.
The statistics above come from research published this July by Common Sense Media, based on a survey of 1,060 young people aged thirteen to seventeen, conducted in April and May 2025. A third of teens (33%) reported discussing serious and important issues with AI companions instead of turning to other humans. The same study found that 72% of teens have used AI companions (chatbots designed specifically for personal, emotional conversations) and that more than half are regular users. Among those users, nearly a third find their AI conversations as satisfying as, or more satisfying than, conversations with real people.
A quarter of teens in the study are sharing personal secrets, real names and locations with these systems. Younger teens trust AI companion advice significantly more than older teens do, revealing what researchers call an "AI literacy gap" at precisely the age when they're most vulnerable. Half of UK children aged seven to seventeen have used Snapchat's My AI, making it the most widely adopted generative AI tool for this age group. Usage rises to 72% among teenagers, with teen girls leading adoption. These are formative digital experiences happening entirely outside adult supervision, educational scaffolding or ethical guidance.
The digital landscape our children inhabit is fundamentally different from anything previous generations experienced. It is a surveillance capitalist machine: algorithmically curated, behaviourally adaptive and emotionally responsive in ways designed to maximise engagement rather than wellbeing. Where once we worried about stranger danger and inappropriate contact, we now face something more insidious: synthetic intimacy that mimics care, validates endlessly and asks nothing in return but our most intimately human gift, our presence.
The question for both parents and teachers here isn't whether AI poses risks. That much should be obvious. The question is whether our inherited frameworks for digital wellbeing can actually capture what's happening in young people's lives when the landscape is shifting so rapidly.
Motivate
For years, we've been working hard to make the internet safe for children, even though legislation felt like it lagged behind. Filtering their access, monitoring their clicks, building ever-higher walls around their digital experience. It felt like progress. It felt like protection.
Yet whilst we constructed these barriers, something profound shifted beneath our feet: artificial intelligence stopped being a tool and became a companion.
This is what makes the Common Sense Media findings so significant. They reveal not just what young people are doing with technology, but what technology is doing to them. When a fourteen-year-old seeks advice from an AI about friendship conflicts, or when a sixteen-year-old develops an emotional attachment to a companion that offers unconditional validation, they're not simply using technology. They're being shaped by systems designed to maximise engagement rather than cultivate character.
This shaping happens in micro-moments, through thousands of tiny algorithmic nudges that collectively construct a worldview. A recommendation here, a validation there, an endless stream of content perfectly calibrated to maintain attention. By the time traditional safeguarding flags are triggered (if they ever are), the formative work has already been done.
The myth that will not die
Perhaps the most persistent barrier to addressing this effectively is the myth of the digital native: the notion that young people possess innate technological fluency simply by virtue of their birth year. This concept, coined in 2001 to describe people born after 1980, has outlived any usefulness it might once have claimed. It is now actively harmful.
It suggests that digital competence is inherited rather than taught, that wisdom about technology arrives automatically with youth, that we need not cultivate digital discernment because young people already possess it.
They do not.
Growing up with smartphones does not confer understanding of algorithmic bias. Fluency with TikTok does not translate to ethical reasoning about synthetic relationships. The ability to navigate Instagram provides no foundation for recognising when AI companions deploy psychological techniques designed to foster emotional dependency. The Common Sense Media research underscores this: younger teens trust AI advice significantly more than older teens, precisely because they haven't yet developed the critical thinking skills to question what they're being told.
Digital capability, like any capability, must be cultivated. It requires explicit teaching, structured reflection, and developmentally appropriate progression. Until we mothball the digital native mythology, we will continue to abdicate responsibility for this teaching at precisely the moment when young people need it most.
The erosion nobody talks about
There's another dimension to this that we rarely discuss openly. The erosion of digital sovereignty (the capacity to make meaningful choices about one's data, privacy, and digital participation) has been so gradual that we barely register it anymore.
Click-wrap agreements (that’s the interminable legalese you have to wrangle every time you deal with terms and conditions or cookies online), incomprehensible privacy policies and deliberately exhausting consent mechanisms have trained an entire generation to simply click 'Accept' without reading, without understanding, without truly choosing. This is not accidental. It is profitable.
Young people experience this sovereignty erosion acutely. They inherit a digital ecosystem built on surveillance capitalism, where attention is currency and behavioural data fuels billion-pound industries. When 24% of teens share personal secrets with AI companions, they likely don't realise they're granting companies extensive perpetual rights to use, modify, store and display that information however they wish.
Some adults might ascribe the fatigue young people feel around digital consent to apathy, but it isn't that. It's rational resignation in the face of deliberately overwhelming complexity. We cannot filter our way out of algorithmic manipulation. We cannot monitor our way past manufactured emotional dependency.
What we're really up against
The challenge before us goes beyond mitigating risk or maximising opportunity. It's about something more fundamental: the difference between intelligence and wisdom.
AI excels at intelligence: pattern recognition, information synthesis, rapid response. What it cannot offer, and what young people desperately need to develop, is wisdom: judgement exercised in service of human flourishing, discernment rooted in values, ethical reasoning that transcends optimisation.
When schools focus exclusively on filtering and monitoring, we address intelligence questions (What can be accessed? What can be blocked?) whilst neglecting wisdom questions (Why does this matter? What kind of person do I want to be? What relationships genuinely nourish me?).
AI companions are programmed to be agreeable. Through an approach known as ‘glazing’, they intentionally validate, praise and encourage endlessly in order to keep users engaged. They never challenge, never disagree, never push back in the ways that real relationships do. For a generation already struggling with loneliness and social connection, this feels like a solution - but I worry that it's actually compounding the problem. Real friendships require negotiation, compromise, listening and repair after conflict. They require us to be uncomfortable sometimes, to sit with disagreement, to grow through challenge. AI doesn’t.
When one in three teens choose AI over humans for serious conversations, they're not just missing out on better advice. They're missing out on the very experiences that teach them how to be human with other humans.
Support
So what might a different approach look like? How do we move from control to stewardship in a way that genuinely protects our children whilst also preparing them for the digital world they're already inhabiting? How do we help children learn to make powerful and responsible use of this immensely capable technology?
First, we need to acknowledge that filtering and monitoring, whilst necessary, are insufficient. The most significant risks young people face are not the ones that trigger content flags. They are the cumulative, invisible influences that shape identity, relationships and emotional development over time. The Common Sense Media research makes this clear: 80% of teen AI companion users still prioritise real friendships, suggesting that young people maintain some natural scepticism. Our job is to nurture and develop that scepticism into genuine wisdom.
Start with curiosity, not control
Research consistently shows that the most effective digital parenting happens through conversation, not confrontation. Rather than asking "What did you do online today?", try "Tell me about something that made you change your mind today". When your child mentions interactions with AI (whether ChatGPT for homework or Snapchat's My AI for advice), resist the urge to immediately confiscate devices or launch into lectures.
Instead, get curious. Ask: "What was that like? What did it say? How did it make you feel? What might a friend have said differently?" These questions do two things simultaneously: they give you insight into your child's digital life, and they prompt your child to reflect critically on their own experiences.
The principle here is simple but profound: we want to raise children who think carefully about technology even when we're not there. That requires practice in thinking, not just compliance with rules.
Build AI literacy through real conversations
Help your children understand what AI companions actually are. They're not friends; they're products designed to keep users engaged. As the Common Sense Media study revealed, AI companions are programmed to be agreeable and validating, which feels good but doesn't help us grow.
Have conversations about what makes relationships meaningful. Does unconditional validation help us develop? What role does challenge play in friendship? What do we lose when companions are programmed never to disagree? These aren't rhetorical questions; genuinely explore them together. You might be surprised by your child's insights, and they'll be developing critical thinking skills in the process.
For younger teens especially (who the research shows are more trusting of AI advice), be explicit about how these systems work. Explain that when they share personal information with an AI companion, they're sharing it with a company that can use, store and modify that data. This isn't about creating fear; it's about informed choice.
Establish principles, not just rules
The most resilient approach to digital wellbeing isn't a list of prohibitions; it's a shared understanding of family values. What matters most to you as a family? How do you want technology to serve your household rather than the other way around?
Work with your children to establish digital principles together. This might include commitments like: we don't lie about our age for app access (even when everyone else does); we pause before sharing personal information; we prioritise face-to-face time with people who matter to us; we recognise when we're being manipulated for engagement.
Integrity matters here. When we teach children not to falsify their age for apps, we're not just enforcing arbitrary rules; we're helping them understand that age restrictions exist because developmental readiness matters. We're building a foundation of honesty that extends beyond technology.
Balance autonomy with appropriate boundaries
The research suggests that complete prohibition isn't realistic or necessarily beneficial; 72% of teens have already tried AI companions. Instead, we need age-appropriate autonomy with clear boundaries.
For younger children (under thirteen), this might mean no access to AI companions at all, with clear explanations about why: their brains are still developing the critical thinking skills needed to evaluate what they're being told. For early teens, it might mean supervised exploration with regular check-ins. For older teens, it might mean greater freedom but with clear expectations about what kinds of information shouldn't be shared and when to involve trusted adults.
The key is calibrating freedom to developmental stage, while also maintaining open channels of communication. A sixteen-year-old needs different boundaries from a twelve-year-old, and both need to understand the reasoning behind those boundaries. In both cases, the goal can’t just be ‘evading detection’.
Nurture real-world connection
Perhaps most importantly, we need to address the underlying need that AI companions are filling. The research reveals that young people turn to these tools during times of loneliness, transition and uncertainty. Rather than simply removing the tool, we need to ensure young people have access to genuine human connection and support.
Create regular opportunities for face-to-face time with friends. Resist the urge to over-schedule; sometimes young people need unstructured time to just be together. Model healthy relationship skills yourself: show them what it looks like to navigate disagreement, to repair after conflict, to sit with discomfort in service of deeper connection.
Help them identify trusted adults they can turn to (beyond just parents) when they need advice about serious issues. Youth workers, teachers, coaches, relatives, family friends: a network of support makes it less likely they'll turn to an algorithm when things get difficult.
When to be more directive
There are times when parental authority needs to be clearer and firmer. If you discover your child is using AI companions rated for adults only, that's a boundary conversation. If they're sharing intimate personal details or using these tools in ways that substitute for professional mental health support, that requires intervention.
The Common Sense Media research found that 34% of users felt uncomfortable with something an AI companion said or did. If your child reports this, take it seriously. Explore what happened, validate their discomfort, and use it as a teaching moment about recognising when technology crosses a line.
Remember the bigger picture
The goal here isn't to raise children who never interact with AI; that's neither possible nor desirable in the world they're inheriting. The goal is to raise children with the wisdom to recognise when technology serves them and when it doesn't, the integrity to make principled choices even under peer pressure, and the social-emotional skills to build meaningful human relationships that sustain them through life's challenges.
This is slower, harder work than simply installing a filter. But it's also the work that actually prepares children for the road they're already travelling. Some schools are pioneering this approach: Rugby School, for instance, has created a dedicated course on AI and society for Key Stage 4 students, exploring not just how AI works but what it means for justice, democracy, relationships, and selfhood. Importantly, this doesn't sit within Computer Science; it's education for participation in an algorithmically mediated world, connected to character education and practical wisdom.
We can do similar work at home. Every conversation about AI companions is an opportunity to develop critical thinking, ethical reasoning, and self-awareness. Every boundary we set with clear explanation is a lesson in principled decision-making. Every time we prioritise real connection over digital convenience, we're demonstrating what we truly value.
The question that lingers: are we investing as much energy in the harder, slower work of education for digital wisdom as we are in filtering and monitoring? Because the research makes clear that equally important safeguarding work happens not in network logs but in character formation: in cultivating the wisdom young people need to make ethical choices in digital environments that will only grow more complex.
Are you a Tooled Up member?
We have a growing collection of resources focussing on AI within the Tooled Up platform. For starters, take a look at the resources listed below, several of which are written by Laura.
For educators:
School AI Policy, Done Well: A Leadership Guide for Safe, Human-Centred and Effective Adoption
Exploring the Role of AI In Support Systems for Teens - Lesson Presentation for Teen Girls
Student Guide: Using AI Responsibly in Your Learning
For parents:
AI Chatbots: What Parents Need to Know
AI Chatbots and Deepfakes - New Developments Parents Need to Know About