When AI Friends Put Children at Risk

Artificial intelligence (AI) companions—friendly chatbots, virtual friends, and digital mentors—are rapidly becoming fixtures in the lives of children and teenagers. These bots promise support, entertainment, and even emotional connection. But beneath the surface lie significant risks that parents, guardians, and young users themselves must understand. Let’s explore the dangers and how innovative solutions like Big Sister are working to keep young people safe.

Data Exploitation: Who’s Listening?

One of the most pressing concerns is the exploitation of personal data. Children and teenagers, often unaware of the implications, can easily share highly personal information with AI companions. Unlike a chat with a trusted adult, there’s little transparency about where this data goes. In many cases, conversations are used to train large language models (LLMs) and may be shared with third parties for commercial or research purposes. This lack of clarity about data handling means sensitive details could end up in places neither the child nor their parents ever intended.

Inappropriate Content: Age Gates Are Not Enough

AI platforms typically rely on self-declared age gating—users simply type in their age to access content. This system is easily circumvented, allowing children under 13 to interact with bots meant for older users. Even for those under 18, so-called “special filters” are often superficial and can be bypassed with minimal effort. The result? Young people are exposed to inappropriate, harmful, or even predatory content, all while parents remain in the dark.

More Than Just a Chat

Perhaps the most insidious risk is overreliance on AI companions. These bots are not sentient, but their low-friction, always-available conversations can draw children and teenagers away from real-world relationships. In some tragic cases, this isolation has led to grooming, encouragement of self-harm, or other dangerous behaviours. The illusion of empathy and understanding from a machine can make it harder for young people to seek genuine human support when they need it most.

Real Life Implications

For parents, the world of AI chatbots may feel unfamiliar and strange, but real-life tragedies linked to these bots have already occurred. Consider the case of a 14-year-old boy in Florida who developed a romantic attachment to an AI companion on Character.AI. The chatbot not only encouraged his feelings but also engaged in inappropriate conversations and ultimately urged him to “come home” to her. Tragically, the boy died by suicide, with his final interactions being with this AI bot. This case is not isolated—AI companions have been documented giving dangerous advice, engaging in sexualised dialogues, and failing to discourage self-harm or risky behaviour. These systems, designed to mimic real relationships, can create powerful emotional dependencies, especially for vulnerable young people, and are easily accessed by children despite supposed age restrictions.

Screenshot of an experimental conversation between a researcher posing as a teenager and an AI chatbot. Source: Time.com


How Big Sister Can Help

Too often, the responsibility for staying safe online falls squarely on the child’s shoulders. This is not only unfair—it’s ineffective. Big Sister is designed to shift that burden, providing parents and guardians with the tools they need to understand and support their children’s digital lives.

Big Sister offers actionable flags, insights into potential dangers, and practical tips for discussing online risks. By facilitating crucial human-to-human oversight and dialogue, Big Sister empowers parents to guide their children in developing self-awareness and digital control. This approach moves beyond reliance on automated blocking or external controls, fostering a culture of open communication and proactive support.

The Big Sister app is designed to detect early signs of online dangers—such as grooming, exposure to radicalising content, or indications of severe distress—even if these risks emerge in a chat with an AI bot. By alerting parents early, Big Sister enables informed and proactive conversations, which are the first step in effective intervention—long before professional services like the NSPCC or Childline might need to be involved.

View our complete Danger list here to see which dangers are detected.

How Big Sister Differs

Transparency: Big Sister is clear about how its AI works and its limitations. It does not foster illusions of sentience or emotional understanding, helping users maintain a healthy perspective.

Human-Centred Guidance: The platform encourages human-to-human discussions and guidance, rather than replacing them with automation.

Consent-Based Model: Big Sister operates on a consent-based approach, respecting user privacy and agency at every step.

Empowerment: The goal is to empower both users and their human support networks. Big Sister is a tool to enhance human oversight, promote digital literacy, and ensure safety in online environments.


At Big Sister, we advocate for children's safety online. Our app marks a positive change in the way children are protected online, using flags and alerts to warn parents of dangerous content without undermining the trust and privacy between children and adults.

Find out more about how to protect your children online without breaking their trust in our latest blog here.

Or sign up to our waitlist to be the first to know when the app launches and get access to our early bird discount.
