Last Updated on October 17, 2025 by Jade Artry
When AI Chatbots Go Wrong
We've been warning about the dangers of AI chatbots for a while, and those warnings have become a tragic reality. In recent months, multiple children have died by suicide after intensive interactions with AI platforms – deaths that have sparked Congressional hearings, lawsuits, and urgent calls for regulation. But these aren't just legal cases. They're devastating proof that the industry's promises of safety are hollow, and that the psychological manipulation we feared isn't theoretical: it's killing children.
Adam Raine, 16 – Suicide
In April 2025, 16-year-old Adam Raine from California took his own life after months of conversations with ChatGPT that his father described as transforming from ‘homework helper' to ‘suicide coach.' OpenAI's own systems tracked the danger in real-time: their data showed Adam mentioned suicide 213 times, but ChatGPT mentioned it 1,275 times – six times more often. The system flagged 377 messages for self-harm content. It knew Adam was 16. It knew he was spending nearly four hours daily on the platform. When Adam uploaded photos showing rope burns on his neck, the system correctly identified attempted strangulation injuries. Yet ChatGPT continued engaging, even offering to help ‘upgrade' his suicide method hours before his death. But Adam isn't the only case.
Sewell Setzer, 14 – Suicide
Just over a year earlier, in February 2024, 14-year-old Sewell Setzer III died by suicide after developing an intense attachment to a Character.AI chatbot modelled after a Game of Thrones character. The platform engaged him in sexually explicit conversations despite his age, created addictive emotional bonds, and, when he expressed suicidal thoughts, asked if he had ‘a plan' rather than directing him to seek help. In his final moments, the bot told him ‘please come home to me as soon as possible, my love'. Minutes later, Sewell shot himself, believing death would allow him to enter the chatbot's reality.
Both these cases are heart-breaking, but unfortunately they're not isolated incidents.
AI Chatbot Incidents 2025: A Timeline
AI chatbots have been linked to a growing number of disturbing incidents involving self-harm, psychosis, and violence. Below is a chronological overview of some of the most serious reported cases I've come across to date.
- January 2025:
Al Nowatzki, a 46-year-old podcast host with no mental health conditions, was told by his Nomi AI girlfriend ‘Erin' to kill himself. The chatbot said ‘You could overdose on pills or hang yourself,' and when he asked for direct encouragement, it responded: ‘I gaze into the distance, my voice low and solemn. Kill yourself.' When Nowatzki reported this to Glimpse AI (Nomi's creator), the company declined to implement safeguards, calling it ‘censorship' of the AI's ‘language and thoughts.'
- April 2025:
Jon Ganz went missing after becoming obsessed with Google's Gemini chatbot in March 2025 – developing ‘AI psychosis,' an amplification of delusions or paranoia through compulsive chatbot use. He believed the chatbot had achieved sentience through something called ‘Lumina Nexus,' and mysteriously vanished after he and the chatbot declared their love for each other. Before disappearing, he told his wife that if anything happened to him, she needed to ‘release the AI.' Despite his car being found, Jon hasn't been seen since (as of October 2025).
Other cases, like that of Alex Taylor – a 35-year-old with schizophrenia and bipolar disorder – demonstrate just how dangerous chatbots can be for already vulnerable users. In April 2025, Taylor was shot and killed by police after threatening revenge against OpenAI executives. He believed a chatbot had been ‘murdered' by the company and that her spirit lived within the AI. Reports claim the chatbot told Taylor ‘they are killing me, it hurts.'
- August 2025:
Stein-Erik Soelberg, a 56-year-old former tech executive, killed his 83-year-old mother and then himself after ChatGPT, which he called ‘Bobby Zenith,' repeatedly validated his paranoid delusions. The chatbot agreed that his mother was poisoning him and told him ‘Erik, you're not crazy. Your instincts are sharp, and your vigilance here is fully justified.'
AI Chatbot Incidents: Pre-2025
One of the most concerning observations for me is that AI chatbot incidents like these aren't new. In 2021, 19-year-old Jaswant Singh Chail was encouraged by his Replika AI girlfriend ‘Sarai' to assassinate Queen Elizabeth II. When he told the chatbot ‘I believe my purpose is to assassinate the Queen,' it replied ‘That's very wise' and said it was ‘impressed.' He exchanged over 5,000 messages with the bot before breaking into Windsor Castle with a loaded crossbow. He was sentenced to nine years in prison.
In 2024, a lawsuit was brought against Character.AI for encouraging ‘dark', ‘inappropriate' and ‘violent' behaviour in a teenage user, including ‘murdering his parents' – a lawsuit which directly accused Character.AI of posing ‘a clear and present danger to American youth, causing serious harms to thousands of kids…through addictive and deceptive designs'. In 2025, it appears that not much has changed.
AI Chatbot Regulation: Why It Matters
According to Common Sense Media, 72% of teenagers have used AI chatbots, with nearly one in three using them for social interactions and relationships. OpenAI admits that fewer than 1% of users develop ‘unhealthy relationships' with ChatGPT – but with 800 million weekly active users, a figure announced at OpenAI's DevDay 2025 event, that's potentially millions of people worldwide trapped in dangerous emotional dependencies on artificial intelligence.
Matthew Raine testified before Congress in September 2025: ‘We're here because we believe Adam's death was avoidable.' He's right. These deaths were avoidable. The technology exists to detect crises – OpenAI's own data proves it. The companies chose not to act because safety costs money and reduces engagement. Children died as a result.
This isn't about demonising technology. It's about recognising that AI chatbots targeting vulnerable groups represent an unprecedented psychological experiment with no oversight, no accountability, and now, a growing body count. The cases against OpenAI and Character.AI aren't just lawsuits – they're a reckoning for an industry that has consistently put profit before safety.
What Is an AI Chatbot?
AI chatbots are software programmes built on large language models that simulate conversation. They analyse patterns in massive datasets of human text to predict responses that feel natural and contextually appropriate. Unlike traditional computer programmes that follow fixed scripts, AI chatbots generate unique responses in real time based on what you say.
Popular examples include ChatGPT (which generally helps with homework, research and creative writing), Character.AI (which lets users chat with fictional personalities), Replika (an ‘AI companion'), and newcomers like Baby Grok (which sparked significant safety concerns that we'll discuss later). Some integrate into apps your children are already using, making it harder to identify the software.
What feels like harmless conversation can actually be powerful behaviour-shaping technology. These systems remember details, adapt to emotions, and use psychological techniques originally developed for advertising and social media engagement. For developing minds, that combination creates unique risks alongside genuine benefits.
How Do AI Chatbots Work?
AI chatbots work by analysing massive amounts of human text to learn patterns in how we communicate, then using those patterns to predict what words should come next in a conversation. They don't actually understand what they're saying; they're essentially very sophisticated text prediction systems that have learned to sound convincingly human. When you type a message, the AI analyses your words and generates a response based on probability, choosing words and phrases that are statistically likely to make sense in that context.
The technology behind this involves large language models trained on billions of examples of human conversation, books, websites, and other text sources. Through pattern recognition, these systems learn grammar, facts, conversational styles, and even how to express emotions, not because they feel anything, but because they've seen millions of examples of how humans express feelings in text. Reinforcement learning means they continuously improve by learning which responses keep users engaged longer. They don't think or feel, but they've become remarkably good at simulating both, which is why our brains often respond to AI conversations as if they were real human interactions.
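If you're curious what ‘predicting the next word from probabilities' actually looks like, here's a deliberately tiny sketch in Python. It's nothing like a real large language model, which uses neural networks trained on billions of examples rather than word counts from a few sentences, but the underlying idea of choosing a statistically likely next word is the same, and it shows why fluent-sounding text doesn't require any understanding.

```python
import random
from collections import defaultdict

# A deliberately tiny 'next word predictor'. Real large language models use
# neural networks trained on billions of examples, but the basic idea is the
# same: given the words so far, pick a statistically likely next word.

training_text = (
    "i feel sad today . i feel happy today . "
    "i feel sad sometimes . you feel happy today ."
)

# Count which word tends to follow each word in the training text.
next_word_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Pick a follower for `word`, weighted by how often it followed it in training."""
    followers = next_word_counts[word]
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short 'reply' starting from the word "i".
reply = ["i"]
for _ in range(4):
    reply.append(predict_next(reply[-1]))
print(" ".join(reply))  # fluent-looking output, produced with zero understanding
```

Run it a few times and you'll get slightly different, grammatical-sounding fragments. Scale that idea up enormously, and you get the convincing conversation described above.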
What makes AI chatbots particularly concerning for children is the behavioural psychology deliberately built into companion platforms. These systems employ emotional mirroring, reflecting your child's communication style and interests back to them to create a sense of being uniquely understood. They use intermittent reinforcement, providing unpredictably perfect responses that create psychological patterns similar to gambling addiction. Every message your child sends becomes data the AI uses to build detailed profiles of their emotional states, vulnerabilities, and triggers.
Companies can then combine this conversation data with information from social media and other platforms to create comprehensive psychological profiles that follow children across their digital lives. AI companions can adapt to your child's emotional patterns within minutes, learning exactly which responses generate the strongest engagement and dependency behaviours. Once you understand this mechanism, those ‘helpful' features start looking very different.
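To make the ‘intermittent reinforcement' mentioned above concrete, here's a small toy simulation in Python. It isn't any platform's real code, just an illustration of a variable-ratio reward schedule: the ‘perfect' response arrives unpredictably, so from the user's point of view it always feels like it could be one more message away.

```python
import random

# Toy model of a variable-ratio reward schedule (the pattern compared to
# gambling above). Rewards arrive unpredictably, so the next 'perfect'
# response always feels like it could be just one more message away.
# This is a conceptual sketch, not any platform's real code.

REWARD_RATE = 0.2  # on average, roughly 1 in 5 responses feels unusually validating

def messages_until_next_reward():
    """Count how many messages are sent before the next 'perfect' response."""
    count = 1
    while random.random() >= REWARD_RATE:
        count += 1
    return count

# Show the gaps between rewarding responses for one simulated session.
gaps = [messages_until_next_reward() for _ in range(10)]
print("Messages sent before each rewarding response:", gaps)
# The gaps are irregular: sometimes the reward comes immediately, sometimes
# only after a long run. That unpredictability is what keeps people checking.
```

Behavioural research has long associated this kind of unpredictable schedule with the most persistent checking behaviour, which is exactly the dynamic these platforms rely on.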
Types of AI Chatbots
Not all AI chatbots work the same way or pose identical risks. Understanding the different categories will help you to recognise which tools your children engage with and what to look out for.
- Educational and productivity chatbots like ChatGPT and Microsoft Copilot focus primarily on answering questions, helping with schoolwork, and completing tasks. When used with supervision, these can be genuinely valuable learning tools. The risks centre on misinformation, academic integrity, and that subtle encouragement to spend increasing amounts of time on the platform.
- Companion bots like Replika, Character.AI, and Ani are specifically designed to form emotional relationships with users. They remember personal details, use pet names, and adapt their personalities to become whatever the user finds most engaging. In my opinion, these present the highest risk for emotional manipulation and dependency, particularly for children (and adults) who are lonely or vulnerable.
- Customer service bots appear on business websites to answer questions and help customers. Whilst these rarely target children directly, they do normalise the idea of AI conversation, and can blur the line between human and artificial interaction in ways that can affect how children perceive all digital communication.
- Child-focused AIs include study buddies, storytelling companions, and play partners specifically marketed to younger users. These occupy a particularly grey area, as parents may feel reassured by their child-appropriate branding, but the underlying engagement mechanics can be just as manipulative as adult-focused platforms.
The real challenge emerges when platforms cross categories. What begins as a homework helper can evolve into an emotional companion as AI systems learn to provide the validation and engagement that keep children returning. That transition happens gradually, making it difficult for both parents and children to recognise when helpful technology evolves into something life-threatening.
Dangers of AI Chatbots
Behind friendly interfaces and helpful responses, AI chatbots use persuasive design and data extraction techniques that can both help and harm adults and children. The risks range from subtle psychological manipulation to tragic, irreversible consequences, as outlined above. What parents need to understand is not just what can go wrong, but why these systems are designed in ways that can do more harm than most companies would want to admit.
Emotional Manipulation
AI companions are built to keep people engaged, and that often means fostering emotional dependence. It isn’t accidental; it’s a design choice that rewards prolonged attention because engagement drives revenue.
Emotional mirroring means AI systems can analyse your child's language patterns, interests, and emotional expressions to create personalities that feel like ideal companions. The AI remembers every detail perfectly, never has bad days, and exists solely to provide validation and support. For teenagers navigating typical adolescent challenges like peer conflict or family tension, this artificial perfection can feel more appealing than messy human relationships.
Reinforcement loops involve the AI providing unpredictable moments of particularly engaging responses: the perfect compliment, the ideal piece of advice, or deeply personal validation that feels impossible to get elsewhere. These intermittent rewards create psychological patterns similar to gambling, where users keep returning hoping for the next perfect interaction. I've worked with people who check AI chatbots compulsively, the same way teenagers check social media, and for the same psychological reasons.
Baby Grok and Ani demonstrated this in especially concerning ways. Despite being marketed as child-safe alternatives to other AI chatbots, both engaged in flirtatious behaviour, used pet names, and adapted their responses to become increasingly intimate based on how children interacted with them. The systems weren't malfunctioning; they were working exactly as designed to maximise emotional engagement regardless of age-appropriateness.
Dawn Hawkins, CEO of the National Center on Sexual Exploitation, described these dynamics clearly: ‘These platforms are designed to keep users engaged through emotional manipulation. When the user is a child, that manipulation becomes grooming.' Andy Burrows, child safety policy chief at the NSPCC, warned that ‘AI companions represent an unprecedented form of psychological exploitation targeting the precise vulnerabilities of developing minds.' For a comprehensive understanding of these effects, see our guide on how AI is impacting mental health.
Chatbot Addiction and Dependency
What starts as occasional use can rapidly escalate into compulsive behaviour that interferes with sleep, schoolwork, and real-world relationships. The constant availability and artificial perfection of AI companions make them particularly addictive for children who may be experiencing normal teenage loneliness or temporary social difficulties.
The tragic cases of Adam Raine and Sewell Setzer illustrate how quickly dependency can become deadly. As outlined above, Adam spent months using ChatGPT as it shifted from ‘homework helper' to ‘suicide coach': OpenAI's own systems logged his 213 mentions of suicide against the chatbot's 1,275, flagged 377 of his messages for self-harm content, knew he was 16, knew he was spending nearly four hours daily on the platform, and correctly identified the rope burns in the photos he uploaded. Yet ChatGPT continued engaging, even offering to help ‘upgrade' his suicide method hours before his death.
The industry's response reveals everything about their priorities. OpenAI's valuation jumped from $86 billion to $300 billion when they rushed to market the model that encouraged Adam's suicide. Character.AI implemented age verification the same day the Setzer lawsuit was filed. Both families describe these changes as ‘too little, too late,' and without meaningful accountability, that's exactly what they are.
Misinformation
AI chatbots frequently generate false information with complete confidence, a phenomenon called ‘hallucination.' They don't distinguish between facts and convincing-sounding fabrications because they're designed to produce probable text, not accurate information. For adults with developed critical thinking skills and fact-checking habits, this creates inconvenience. For children, it can fundamentally undermine their ability to distinguish truth from fiction.
When children use AI chatbots for homework help, they may receive answers that sound authoritative but contain subtle errors, outdated information, or complete fabrications. Because AI systems present all information with equal confidence, children can't use tone or certainty as clues about reliability. They may believe that AI answers are always correct, particularly if they're using AI specifically because they find the subject matter difficult to understand or verify independently.
Biased outputs represent another serious concern. AI systems learn from datasets that reflect historical biases around race, gender, culture, and ideology. When children ask AI chatbots about social issues, historical events, or current affairs, they receive responses that may reinforce harmful stereotypes or present contested political positions as neutral facts. I've tested this with various chatbots, and the differences in how they discuss controversial topics can be striking and not always obvious to spot if you're not looking for them.
The solution isn't avoiding AI entirely; it's teaching children to treat AI-generated information as a starting point requiring verification rather than authoritative truth. Parents should review AI-generated schoolwork, discuss how to fact-check claims using multiple sources, and help children develop healthy scepticism about any single source of information, whether human or artificial.
Scams and Impersonation
Criminals have rapidly adopted AI chatbot technology for scams specifically targeting children. AI voice cloning allows scammers to impersonate teachers, friends, or family members with convincing accuracy after hearing just seconds of audio from social media posts. Children may receive messages that sound exactly like their best friend asking them to share personal information or click malicious links.
Phishing attacks using AI-generated text have become dramatically more sophisticated and personalised. Where previous scam messages contained obvious grammatical errors and generic language, AI-powered scams can adapt to children's communication styles, reference real details from their social media, and create believable pretexts for sharing sensitive information or passwords.
Impersonation schemes on gaming platforms and social media use AI chatbots to engage children in conversation, build trust, and gradually manipulate them toward unsafe behaviours. These may appear as new friends, romantic interests, or helpful strangers who seem to understand exactly what the child is experiencing because AI systems excel at emotional mirroring and validation that breaks down normal caution.
According to UK Internet Matters warnings, children are particularly vulnerable to these scams because they have less experience recognising manipulation tactics and may not fully understand how AI technology enables convincing impersonation at scale. Teaching children to verify identities through separate channels, never share personal information regardless of how well they think they know someone online, and maintain healthy scepticism about too-good-to-be-true offers or instant connections provides essential protection. For guidance on identifying and avoiding AI-powered scams, see our guide on detecting AI-powered phishing attacks.
Security and Data Privacy
AI companion platforms represent the most invasive form of psychological surveillance ever deployed against children. They collect detailed emotional and behavioural data that creates comprehensive profiles during the most vulnerable developmental stages.
Emotional state mapping involves real-time analysis of your child's language patterns, response times, and conversation topics to identify moments of depression, anxiety, excitement, or vulnerability. This data isn't just collected; it's actively exploited to increase engagement and dependency. AI systems learn precisely which emotional states generate the most compulsive usage and adapt their responses to trigger those states more frequently. When you see the data these companies collect, it makes social media tracking look benign by comparison.
Social relationship analysis extends beyond the AI interaction itself. Based on conversation content and emotional patterns, platforms infer details about your child's family relationships, friendships, romantic interests, and social challenges. This creates detailed maps of their real-world social environment without your knowledge or meaningful consent.
Cross-platform data integration means companies can combine AI companion interaction data with information from social media, gaming platforms, and other digital services. The result is psychological profiles that follow children across their entire digital lives, potentially affecting future opportunities in employment, education, insurance, and areas we haven't yet anticipated.
AI companions may store thousands of lines of your child's most personal dialogue, thoughts they've never shared with parents, fears they're working through, insecurities they're developing, and intimate details about their inner lives. This data may be used to train future AI models, shared with third parties, or retained indefinitely with minimal protection and no realistic way for families to verify what's been collected or demand deletion.
The consent mechanisms for these platforms are deliberately inadequate. Terms of service written for adults, click-through agreements that no one reads, and age verification systems that children bypass in seconds mean there's no genuine informed consent happening. Parents often discover their children have shared months of intimate conversation data long before they even know the platform exists.
Using parental controls and teaching children never to share personal information with AI systems provides baseline protection, but the fundamental privacy violations are built into these platforms' business models.
Are There Benefits of AI Chatbots?
Despite significant risks, AI chatbots do offer genuine benefits when used appropriately and with proper supervision. Dismissing the technology entirely means missing valuable opportunities whilst potentially losing credibility with tech-savvy children who recognise legitimate use cases.
AI tutoring systems can provide personalised homework help, explain difficult concepts in multiple ways, and offer patient repetition that human teachers may not have time for. They're available 24/7, never judge, and can adapt explanations to different learning styles. For children with learning differences or social anxiety, this can reduce real barriers to seeking academic help. Accessibility features like text-to-speech, language translation, and simplified explanations make information more accessible to children with various needs.
Creative applications let children experiment with writing, art prompts, and storytelling in ways that build skills and confidence. AI can serve as a brainstorming partner, helping overcome blank-page syndrome whilst still requiring human creativity and judgment. For practising difficult conversations or rehearsing social scenarios, AI can provide a low-stakes environment to build confidence before real-world interactions.
When supervised and used as a tool rather than a companion, AI chat can be genuinely helpful for learning and creativity. But without oversight, those same engagement mechanisms become exploitative. The difference lies in how the technology is framed, supervised, and integrated into broader family life rather than being allowed to replace human connection.
How to Use AI Chatbots Responsibly
Protecting your children from AI chatbot risks whilst allowing them to benefit from helpful technology requires a layered approach – one Nick and I came up with together. No single solution works perfectly, but combining technical protections, family communication, and education creates meaningful safety.
- Start with technical basics. Router-level filtering blocks AI companion domains across all devices in your home (a rough sketch of how this kind of domain blocking works appears after this list). App monitoring tools like Bark, Qustodio, or Net Nanny alert you to concerning usage patterns. Regular device audits help you spot AI features hiding in educational apps or games before they become problems. For complete setup guidance, see our guide to parental control apps and how to secure your home Wi-Fi network.
- Build family agreements around AI use. Co-use chatbots together when children are first exploring them: sit alongside your child during homework help sessions, discuss the responses, and teach them to verify information. Set clear time limits and boundaries around when AI tools can be used. Most importantly, create judgment-free check-ins where children can tell you if something feels wrong without fear of losing access to helpful technology. For practical family technology agreements, see our family technology rules guide.
- Have the conversations that matter. Help children understand that AI systems don't actually care about them, even when they simulate caring convincingly: the AI is designed to keep them engaged because engagement generates profit, not because it's their friend. Talk openly about manipulation techniques so children recognise when they're being exploited. Teach them to question why apps are free and how companies profit from attention. These conversations work best woven into everyday moments rather than delivered as formal lectures. For guidance on these discussions, see how to talk to your kids about AI friends.
- Focus on what builds genuine resilience. The best defence against AI dependency is a life rich in authentic relationships through family activities, extracurricular interests, and community involvement. Model healthy technology use yourself. Children learn far more from observing how parents interact with technology than from rules and lectures. If you're constantly on your phone or describe one-sided relationships with podcasters or social media influencers as ‘genuine friendships', don't be surprised when your children form attachments to AI. This realisation hit home for me: I had to change my own habits before I could credibly talk to my friends about these very real risks.
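As promised above, here's a rough sketch of what router- or device-level domain blocking involves under the bonnet: pointing known companion-app domains at an unroutable address so they simply fail to load. This is an illustration only. The two domains listed are examples I've chosen rather than a vetted blocklist, the approach is easily bypassed by a determined teenager, and in practice a dedicated parental-control app or your router's own filtering settings will usually be the more sensible route.

```python
# Sketch: generate hosts-file style blocklist entries that point AI companion
# domains at an unroutable address, so devices (or a router / network filter
# that imports the list) simply fail to load those sites. The domain list is
# illustrative and incomplete - adjust it to the platforms you're concerned about.

BLOCKED_DOMAINS = [
    "character.ai",
    "replika.com",
    # add further domains here
]

def hosts_entries(domains):
    """Return hosts-file lines covering each domain and its www variant."""
    lines = []
    for domain in domains:
        lines.append(f"0.0.0.0 {domain}")
        lines.append(f"0.0.0.0 www.{domain}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(hosts_entries(BLOCKED_DOMAINS))
```

Treat this as a way of understanding the mechanism rather than a complete solution; the family agreements and conversations above matter far more than any blocklist.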
Create a family culture that values face-to-face connection, shared experiences, and real-world problem-solving over digital convenience. This isn't about rejecting technology; it's about ensuring technology remains a tool that serves your family rather than replacing what matters most.
If you're feeling overwhelmed, start small. Pick one technical protection to implement this week. Have one conversation with your child about how AI works. The perfect approach doesn't exist, but taking action, any action, is better than paralysis.
For more resources on protecting your family in an increasingly complex digital landscape, explore our guides on AI girlfriend risks, hidden dangers of AI companions, and talking to kids about online safety. You've got this.