The NSPCC is the UK's leading child protection charity, which believes that every childhood is worth fighting for.

Generative AI poses a variety of serious risks to children. Harm can arise when children consume AI-generated content, create it themselves, have their likeness used in deepfakes, or when they are targeted by bad actors using the technology. As these tools become more powerful, accessible, and embedded in everyday services, the NSPCC was concerned that these risks are only set to grow, and that EU and UK legislation is not keeping pace.

The NSPCC commissioned AWO to consult AI and child safety experts to identify the risks that Generative AI poses to children’s safety and the likely trajectory of those risks, and to identify any existing or potential mitigations.

  • Risks to children

    Based on interviews with 34 experts, AWO identified that children face seven categories of risk linked to Generative AI: Child Sexual Abuse and Exploitation Material (CSAM and CSEM); sexual extortion; sexual grooming; sexual harassment; non-sexual bullying; harmful content; and customised adverts and content.

  • Solutions

    The study describes solutions to the risks that children face from Generative AI, including measures targeting a model's development, release and maintenance stages, along with how children and parents can be better educated to tackle these risks.

  • Legislative gaps

    EU and UK legislation only partially tackles these risks. While both jurisdictions ban or plan to ban Generative AI models designed to produce CSAM and to nudify images, many other legal provisions only partially target Generative AI, or do not target it at all. Beyond this, gaps exist across all stages of Generative AI development, and teachers, parents and children alike are ill-equipped to face the resulting risks.

The result of this commission was a new report published by the NSPCC titled: Generative AI and Child Safety: What are the risks and how can we solve them?

The research informed debates in the UK Parliament on the regulation of Generative AI. It was presented at a Roundtable on Emerging Generative-AI Risks to Children and at the All-Party Parliamentary Group on Domestic Violence and Abuse. The research also informed the creation of NSPCC resources to help parents understand and speak about Generative AI risks with their children.

You can read the full analysis here. Please contact nick.botton@awo.agency if you would like to discuss it.

The analysis was authored by Nick Botton, Esme Harrington and Mathias Vermeulen of AWO, with a foreword by Toni Brunton-Douglas and Lewis Keller, and contributions and editing by Eleni Romanou (NSPCC).

References

  • “AWO’s research and analysis was invaluable to this report, which stands at the intersection of two complex issues: Generative AI and child safety. AWO demonstrated a great degree of subject-matter expertise in both areas, combined with expert legal analysis on the applicability of UK and EU legislation to the issues at hand. They took the kind of multi-disciplinary approach that is fundamentally necessary to make Generative AI safer for children. The AWO team were also a joy to work with, both in project management and administrative terms.”

    Toni Brunton-Douglas

    NSPCC
