Giving Kids on Both Sides of the Atlantic a Seat at the AI Table
Our new global partnership with the leading U.K. children's organization will create opportunities for kids to inform the future of the digital world.
Artificial intelligence has found its way into nearly every aspect of our lives, from our homes to the classroom, and kids are interacting with AI in some way almost every day. In fact, according to our own research, 58% of students age 12–18 report they've used ChatGPT, and 50% have used it for school-related activities.
Organizations around the world are working to get ahead of the impact that AI may have on society and democracy in a way they did not when social media emerged on the scene almost two decades ago. But few of these organizations are surfacing the needs of kids at the decision table—in tech boardrooms, classrooms, and legislatures—to help determine how these technologies will shape the next generation of leaders, thinkers, and voters in the U.S., Europe, and globally.
What do kids and teens see as the most valuable benefits and urgent risks of generative AI? What role do they believe AI will play in their future—and what guardrails do kids envision? As developers build products for global audiences, how do the aspirations that kids have for AI chatbots and online platforms compare across the U.S., Europe, and beyond?
We're filling this gap.
Our new multi-year partnership with the U.K.'s National Society for the Prevention of Cruelty to Children (NSPCC) unites two of the world's leading child advocacy organizations, one on each side of the Atlantic. It marries Common Sense's age-appropriate digital resources and the NSPCC's schools network, our respective U.K. and U.S. youth advisory boards, and the organizations' cross-Atlantic reach to build a global narrative around kids' experiences in the digital world—with their voices at the center.
Over the past year, our organizations have already surfaced the global need for responsible and safe AI that protects children's well-being, equips kids to learn, and centers their voice in its development. Here's what we've done so far.
Unpacking global evidence around AI risks for kids
Common Sense Media's first-of-its-kind AI ratings and review system showcases a number of opportunities, as well as notable risks, posed by AI products, including generative AI tools, for kids and teens who are already uniquely vulnerable to the impacts of technology.
For example, four products received a 1- or 2-star rating out of 5 (DALL-E, My AI, Stable Diffusion, and Loora). Generative AI models in this ratings category host a wide variety of cultural, racial, socioeconomic, historical, and gender biases. These products can perpetuate harmful stereotypes, misinformation, disinformation, and deepfake images, demonstrating an increased likelihood of emotional, physical, and psychological harm.
Our findings are consistent with growing calls to the NSPCC's Childline in the U.K. for support in navigating the use of generative AI for harm and abuse, including to bully peers, generate fake sexual images, and perpetuate harmful misinformation.
Adapting digital and media literacy approaches to keep pace with new digital possibilities
As technology rapidly evolves, educators around the world are eager to ensure that classroom resources address the challenges students face in real time. In the lead-up to elections around the world, AI literacy, as well as news and media literacy, is even more important to prepare the next generation of digital citizens and address the potential harms of AI.
Global partnerships are integral to scaling digital literacy through tailored, localized approaches. Our partnership with the NSPCC will model this needs-led approach, and aims to grow access to our award-winning digital literacy tools through the NSPCC's reach in 90% of U.K. primary schools.
Amplifying young people's perspectives through global convening
When it comes to media and technology, the power of youth perspectives is clear, and is made that much stronger through the combined voices of kids and teens on both sides of the Atlantic.
We piloted the first-ever Common Sense Media Youth Advisory Council in late 2022 with the goal of centering youth voices and insights in our global strategy and programs. Likewise, the NSPCC's Young People's Board for Change engages a select cohort of British youth in advocacy and awareness efforts around child safety.
Some examples of ways we have brought our respective youth councils to the table include Healthy Young Minds: Transatlantic Policy Solutions for a Digital World, a town hall featuring inspiring representatives from the NSPCC youth board and the lead U.K. tech regulator; hearing from U.K. and U.S. teens at our AI and kids roundtable on the margins of the U.K. Prime Minister's AI Safety Summit; and remarks from our CEOs at BETT highlighting AI's impact on learning and child safety.
A shared vision for a healthy digital future
We look forward to bringing our shared vision to life: to equip educators, families, industry, and governments alike in fostering advanced technology that mitigates risks to kids and inspires future change-makers in the U.S., Europe, and globally.

Sara Egozi is the Head of Global Strategy & Policy at Common Sense Media. In this role, Sara leads the organization's international programs to foster responsible tech for children in the digital world.