Last Updated on November 1, 2025 by Jade Artry
If your situation is urgent and you need practical next steps, start with these free parent resources. They'll guide you through talking to your child about what happened, checking what's real, and taking action together – at home, at school, and online.
Deepfake: Parent Resources (Free Downloads)
Learn to spot deepfakes, talk it through with your child, and take the right next steps at home and at school.
Deepfake Spotting Guide
Practical checks to verify audio, video, and images before you share or react.
Conversation Starters (After a Deepfake)
Calm, age-appropriate prompts to help your child process what happened and feel supported.
Family Activities: Build Deepfake Awareness
Short, practical activities to build critical thinking and verification habits at home.
Deepfake Response Checklist
Immediate steps: document, report, de-amplify, and support your child’s wellbeing.
School Incident Template: Reporting a Deepfake
A ready-to-edit template for contacting school, safeguarding leads, or platforms.
For more detailed information on how deepfakes are made and how to spot them, read our full guide on what deepfakes are and how AI voice and video scams work. On this page, I'll focus on the practical response: what to do when a deepfake involves your child.
How Deepfakes Are Affecting Children
Understanding how children encounter deepfakes helps you prepare the right conversations. The clips they see are not always obvious fakes of celebrities or politicians. Often, they're targeted, personal, and designed to confuse or hurt.
According to Ofcom's 2024 research, half of children aged 8 to 15 in the UK have seen at least one deepfake in the past six months. Girlguiding's 2025 survey found that 26% of teens have seen a sexualised deepfake, and schools are increasingly reporting incidents in class group chats.
Children are seeing deepfakes in three main ways: as fake news or misinformation, as bullying or harassment, and as manipulated content that targets them or someone they know. Each one requires a slightly different response.
- Deepfakes as fake news and misinformation: Children trust what they see. A convincing video of a public figure saying something shocking spreads fast, and kids often lack the experience to question whether it's real. They forward it to friends, discuss it at school, and sometimes base opinions on content that never happened.
The problem is not just that they believe false information. It's that repeated exposure to manipulated content erodes their ability to trust anything they see online. Over time, this creates confusion and cynicism, which is why teaching critical thinking early matters.
- Deepfakes as bullying and harassment: Some children use AI tools to create fake videos or audio clips of classmates or teachers. What starts as a ‘prank' can quickly turn harmful. A fake confession, an embarrassing clip, or a manipulated video can spread through school WhatsApp groups in minutes.
A 2025 Childlight study found that 42% of teens who saw a sexualised deepfake of someone they knew felt responsible for helping it spread, even when they hadn't shared it themselves, and 29% reported avoiding school for at least one day afterward.
The emotional impact is real. Victims often feel powerless because the content looks and sounds real, even though it's fabricated. Friends don't always know how to respond, and the person targeted can feel isolated and ashamed. If your child is involved in this kind of situation, whether as a victim or a bystander, see our guide on what to do if your child is bullied with AI.
- Deepfakes that manipulate or deceive: Some deepfakes are designed to manipulate children directly. Fake influencer endorsements, manipulated celebrity content, or audio clips that sound like a friend asking for help can all trick children into taking action they wouldn't normally take.
Understanding what deepfakes are and how they work helps children recognise when something feels off. The goal is not to make them paranoid, but to give them the tools to pause and check before they act.
Signs Your Child May Have Seen a Deepfake
Children don't always tell you directly when something online has upset them. Sometimes the signs show up in their behaviour first.
- Watch for reluctance to open certain apps or group chats, or questions like ‘is this even real?'
- Listen for worries about their own photos being shared online, or mentions of a friend being ‘in a video that isn't right'.
- Take note of withdrawal, trouble sleeping, or visible anxiety – these can also signal that something has happened online.
If you notice these signs, ask open questions. ‘Have you seen anything online recently that felt confusing or upsetting?' works better than ‘what's wrong?' It gives them space to talk without feeling interrogated.
What to Do in the First 10 Minutes
When your child tells you about a deepfake, or you discover one involving them, the first few minutes matter. The goal is to settle emotions, capture evidence, and stop the spread.
- Stay calm, support and acknowledge their feelings. Sit with them. Breathe together if they're upset.
- Provide reassurance: Opening up about something this personal is always difficult – especially if there's a sense of shame attached. Offer words of reassurance: ‘Thank you for telling me. You're not in trouble. We'll handle this together.' Children often worry they'll be blamed, especially if they forwarded something without realising it was harmful. Make it clear from the start that you're there to help, not punish.
- Capture the evidence: Screenshot the post, comments, usernames, URLs, and timestamps. Save profile links of anyone sharing it. If the content is explicit or involves a minor, do not download it. Screenshots are enough to gather evidence you'll need to report the incident to the platform, school, or the police. It also gives you a record in case the content is deleted before you can take further action.
- Stop the spread: Report the post on the platform immediately. Most social media sites have specific reporting options for fake or manipulated content. Keep the confirmation number or receipt for your records. Ask your child not to reply to comments or forward the content to anyone else. If it's circulating at school, note which classes or group chats are involved. You'll need this information when you contact the school.
Where Kids Actually See Deepfakes
Knowing which platforms children use and how deepfakes appear on them helps you have more specific conversations. Deepfakes don't all look the same, and the context matters.
| Platform | Common Deepfake Scenarios | Why It Works | What You Can Do |
|---|---|---|---|
| TikTok / Instagram Reels | Fake celebrity apologies, pranks using a teacher's face, manipulated influencer content | Fast-scrolling culture means less time to question what's real | Watch one clip together, freeze-frame the face, look for lighting mismatches or unnatural movements |
| Snapchat / WhatsApp | AI-generated ‘confession' clips, fake voice messages from friends | Private, short-lived content fuels gossip and urgency | Ask who shared it first, screenshot the usernames, start containment by reporting |
| Discord / Gaming Chats | Voice clips of friends saying offensive or embarrassing things | Sounds personal, often no video for context | Replay it alongside known recordings, explain how voice cloning works |
| YouTube Shorts | Fake product endorsements, manipulated news clips | Algorithm recommends similar fakes, creating echo chambers | Teach reverse image search on thumbnails, check the uploader's history |
These are the spaces where children spend time, and where deepfakes are most likely to appear. Understanding the platform helps you tailor your response.
Teaching Your Child to Spot Deepfakes
Children learn better from hands-on practice than from lectures. Instead of telling them what to look for, show them once and let them practise.
Here are five checks you can do together:
- Does the lighting stay consistent when the face moves? Pause the video and watch for shadows that don't match head movements. Deepfakes often fail to track lighting changes naturally.
- Are small details like teeth, earrings, or background patterns blurred or distorted? AI struggles with fine details. Look for jewellery that looks fuzzy, teeth that seem odd, or backgrounds that shimmer unnaturally.
- Does the voice sound flat, or does the emotion feel mismatched? Cloned voices often lack natural variation. Listen for robotic pacing or emotions that don't match the facial expressions.
- Is the clip appearing first on an unknown or new account? Check the poster's profile. Brand-new accounts with no other content are red flags.
- Can you trace it back to an original source? Legitimate content has a traceable origin. If no one can point to the original post, treat it as suspicious.
Run through these checks once with a real example. Kids learn faster when they spot the mistakes themselves rather than being told what to think.
For more technical signs that adults can use, see our full guide on how AI voice and video scams work.
What to Say When Your Child Is Upset
When emotions run high, keep your language short and steady. Avoid overwhelming them with too much information or complicated explanations. Try these phrases:
- ‘This was not your fault. The responsibility sits with the person who made or shared it.'
- ‘Feeling shocked, angry, or embarrassed makes sense. Those feelings are normal.'
- ‘We'll save what we need for reporting, then step away from screens for a bit.'
If your child is the one who forwarded a deepfake before realising it was harmful, acknowledge that forwarding seemed harmless at the time. Now that you both know it hurt someone, the next step is to report it and, if appropriate, send a short apology to the person affected.
If your child is the target, reinforce that they did not cause this. You will report it, ask the platform to remove it, and let the school know so they can help stop the spread.
For more strategies on these conversations, read how to talk to your kids about online safety.
How to Report a Deepfake
Reporting needs to be clear and specific. Vague complaints get ignored. Detailed reports with evidence get acted on.
When you report a deepfake to a platform, be as detailed as possible. Wording like this works well:
‘This content depicts a minor or was created without consent using AI. Please remove it under your harmful-manipulated-media policy and confirm within 24 hours.'
Keep the confirmation number or receipt. If the platform does not respond within 24 to 48 hours, escalate by reporting again and mentioning your previous reference number.
If the deepfake involves a child being sexualised or exploited, report it immediately to CEOP (Child Exploitation and Online Protection Command) and the Internet Watch Foundation. In the US, you can also report to the National Center for Missing & Exploited Children.
Contact the police if the content is circulating in your community or if it crosses into harassment or defamation.
What to Do If the Deepfake Is Circulating at School
When a deepfake spreads through school, you need to act quickly and involve the right people. Most schools have safeguarding leads and a team responsible for handling situations of concern. Once they understand what's happening, they will be your best ally.
If a deepfake is spreading through school, contact the school's safeguarding lead or head of year within 24 hours. Send them a clear, factual email with the following information:
- A timeline: when it was discovered, where it was seen, who was involved
- Links, usernames, and screenshots
- A list of group chats or classes where it appeared
- Confirmation numbers from any platform reports you've already made
Request the following actions:
- A containment message to relevant classes and parents
- Pastoral support for any pupils affected
- An incident log and engagement with platform safety teams
- A follow-up meeting within one week to confirm containment
Most schools will respond quickly once they see evidence, and will offer ongoing support. If your child is being targeted by classmates in a coordinated way, our guide on what to do if your child is bullied with AI covers group harassment in more detail.
Supporting Your Child Emotionally Is Key
The immediate response is important, but so is what comes next. Children need ongoing reassurance, not just in the first few hours but over the following days and weeks.
Check in daily for the first week. Ask how they're feeling, whether anyone at school has mentioned it, and if they've seen any more content. Keep the questions open and non-judgemental.
Reduce their exposure to the platform where the deepfake appeared, at least temporarily. This is not about punishment. It's about giving them space to recover without being reminded of what happened every time they open an app.
If distress continues beyond a week, or if you notice ongoing anxiety, trouble sleeping, withdrawal from friends, or comments about self-harm, contact a counsellor who understands online harms. Young Minds offers UK-based resources for parents concerned about their child's mental health.
In the UK, you can also contact Childline (0800 1111), The Mix for under-25s, or the NSPCC helpline (0808 800 5000). In the US, contact the Crisis Text Line (text HOME to 741741) or the 988 Suicide and Crisis Lifeline.
Teaching Children to Respond, Not Panic
Rather than trying to protect children from ever seeing a deepfake, teach them what to do when they encounter one. This builds resilience and gives them agency.
Use a simple framework they can remember, like the 3 Rs:
- React: Pause, breathe, don't forward. Take a moment before doing anything.
- Record: Screenshot the names, time, and app. Capture the evidence.
- Report: Show a trusted adult or use the platform's reporting tools.
This works for an eight-year-old and a sixteen-year-old alike. It gives them a clear process without overwhelming them with detail.
Make it clear that they can always show you something worrying without punishment. Create a ‘no-blame reporting pact' and make it visible at home. For broader advice on setting family rules around technology, read how to create healthy family technology rules.
Using Technology to Protect Your Child
Parental controls and monitoring tools can help, but they work best alongside conversation, not instead of it.
Turn on safe search in browsers and app settings. Switch off auto-play and recommendations on the app that caused the issue. These small changes reduce exposure without feeling like surveillance.
If you want more comprehensive monitoring, see our guide on how to choose the right parental control app for your parenting style. For specific platform guidance, read how to use built-in parental controls on social media.
You can also review our list of the best parental control apps in 2025 and the best online safety apps for families.
Technology helps, but it's not a replacement for teaching critical thinking and staying involved in what your child sees and shares online.
Legal Protections for Children
Creating sexualised deepfakes of children is illegal. In many cases, laws on image-based abuse, harassment, and defamation also apply.
In the UK, the Online Safety Act 2023 criminalises sharing intimate images without consent and places new responsibilities on platforms to remove harmful content. The Revenge Porn Helpline offers specialist support and removal assistance.
If your child is a victim, keep your incident log and seek advice from a solicitor who specialises in online harms. You may have legal options, particularly if the content has caused distress or reputational damage.
Turning a Crisis Into a Teachable Moment
Once the immediate crisis has passed, use the experience to build resilience. Talk about consent, empathy, and how fast misinformation spreads.
Ask your child, ‘If you could design an app to stop this, what would it do?' This shifts them from feeling like a victim to thinking like a problem-solver. It gives them agency and helps them process what happened in a constructive way.
Some of the most important conversations happen after the dust settles, not during the panic. Use that time to reinforce the lessons and remind them they handled it well.
Staying Informed Without Scaring Your Child
You don't need to become a deepfake expert to help your child. You just need to know enough to have the right conversations and take the right actions when something happens.
Check in regularly about what your child is using, what they're seeing, and what feels uncomfortable. Keep the technical learning to our guide on what deepfakes are, and use this page for your response plan.
For broader understanding of AI risks and opportunities, explore The Family Guide to AI. You may also want to learn about AI chatbots and their hidden dangers and how to set age-appropriate AI rules.
The goal is not to eliminate all risk. It's to give your child the tools to navigate these situations confidently and know they can come to you when something feels wrong.
Additional Support and Resources
If you need more help, these organisations offer advice and practical support:
- CEOP for child sexual exploitation concerns
- Internet Watch Foundation for removal of illegal child sexual abuse material
- National Center for Missing & Exploited Children (US)
- UK Safer Internet Centre for parent resources and advice
Deepfakes are not going away. The technology will keep improving, and children will keep encountering manipulated content. But with the right preparation and clear boundaries, you can help your child navigate this confidently.
Start with one simple question this week: ‘Have you seen any videos online recently that felt a bit off or confusing?' The answer will tell you whether you need to act now or whether you're already having the right conversations.
The families managing this best share a few traits. They talk regularly. They set clear expectations. They teach critical thinking instead of relying on bans. And they adapt as both the technology and their children grow.
You don't need to be perfect at this. You just need to start.