How Young People Feel About Generative AI

Our new research reveals that young people have mixed perspectives on generative AI, but they agree that now is the time to ensure an inclusive and responsible generative AI ecosystem.


Generative AI is being rapidly adopted across the world, bringing both known and as-yet-unknown risks, including risks to privacy, equity, and accuracy. As generative AI becomes more integrated into daily life—including workplaces, schools, and social interactions—it's important to explore its impact on young people, particularly BIPOC and LGBTQ+ communities.

We conducted our new study, "Teen and Young Adult Perspectives on Generative AI: Patterns of Use, Excitements, and Concerns," in partnership with Hopelab and the Center for Digital Thriving at Harvard Graduate School of Education. We examined generative AI use by race and ethnicity, age, gender, and LGBTQ+ identity to explore in detail how different groups of youth perceive and interact with generative AI technologies. We also posed a few open-ended questions across two separate surveys to ask young people about their feelings on the future of AI: where it will be helpful, what concerns them, and what they think adults need to know about how teenagers are using AI.

Right now, while most youth have heard of it, generative AI use among young people isn't commonplace. Just over half (51%) of young people have used generative AI tools, and only 4% say they use them daily. But they do have feelings—often quite mixed—about the future of the technology and what it means for their lives.

  • Many young people already use generative AI, and believe in its potential to change the world. But they have concerns as well.

    Among the teens and young adults in our study, 41% believe that generative AI will impact their lives both positively and negatively in the next 10 years. In qualitative interviews, they told us "the world is changing" and "AI is the future."

    They're excited about how it could change the way they access information to assist with school, work, and the broader ecosystem of their lives. They're also excited about its impact on creativity and opportunities for human advancement.

    But on the negative side, we heard "AI worries me" and "AI is very creepy." Young people shared fears about dwindling job prospects and concerns about the theft of intellectual property and personal information. They're also aware of and concerned about the harms that can stem from using generative AI to create and spread misinformation and disinformation. And they describe how AI is used "as a bullying tactic online—when creating AI-generated voices and images."

    There are hints here of other uses of generative AI and future challenges. Teens ages 13–17 explained how generative AI is being used for companionship and comfort, "to pretend they have someone to talk to, or to pretend they're talking to their favorite fictional character," and to "find answers to questions teens are scared to ask their parents." One teen explained, "We use AI because we are lonely and also because real people are mean and judging sometimes and AI isn't." It's also being used for sexual exploration: teens describe peers using generative AI "in sexual ways" and "for porn."

  • Generative AI opinions and experiences vary significantly across gender, sexual identity, and racial-ethnic groups.

    Young people's interactions with and perceptions of generative AI are not monolithic. Our study revealed diverse user needs and preferences. For example, among those who have ever used generative AI, Black young people are significantly more likely than their White peers to turn to it for a variety of reasons, including to get information (72% vs. 41%), brainstorm ideas (68% vs. 42%), and help with schoolwork (62% vs. 40%).

    Latinx young people who have ever used generative AI are more likely than White young people to use it for making pictures or images (39% vs. 24%), making sounds or music (27% vs. 7%), and getting help with their job (24% vs. 10%).

    LGBTQ+ young people who have never used generative AI are more likely than their cisgender and straight peers to say that they didn't use those tools due to concerns about inaccuracy and bias in the information provided (34% vs. 14%).

We still have time to center the well-being of young people in the future of generative AI.

Similar to what happened with social media, generative AI is advancing fast. It's important that educators, tech companies, policymakers, and caring adults pay attention now to how young people are using generative AI and the impact it's already beginning to have on them. Parents and educators can use these findings to guide better practice and policy decisions, foster informed dialogue, and spur their own learning about the uses and capabilities of these tools.

There is a critical need for digital platforms to center safety, reliability, and transparency when they develop experiences like generative search, integrate generative AI into existing features, and imagine other applications of AI for information gathering. It's more than the technology—it's important to shape AI from a human-centered perspective, and to ensure that young people aren't forgotten when building new tools. We want to ensure that AI safely meets the needs of young users while anticipating potential unintended consequences for their well-being and development.

At Common Sense, we will continue to support educators and families through our AI product reviews, our research and industry guidance, and future research.

By leveraging the collective wisdom and insights of communities invested in AI across generations, we can create a more inclusive and responsible generative AI ecosystem that anticipates and mitigates risk to young people while also maximizing benefits and growth.

The writers for the full report include Amy Green, Beck Tench, and Emily Weinstein, with support from the teams at Hopelab (Emma Bruehlman-Senecal, Mike Parent, and Jayla Stokesberry), Common Sense Media (Amanda Lenhart, Angela Calvin, Alexa Hasse, and Mary Madden), and the Center for Digital Thriving (Eduardo Lara and Carrie James).

Amanda Lenhart

Amanda Lenhart leads research efforts at Common Sense Media. She has spent her career studying how technology affects human lives, with a special focus on families and children. Most recently, as the program director for Health and Data at Data & Society Research Institute, Amanda investigated how social media platforms design for the digital well-being of youth. She began her career at the Pew Research Center, pioneering the Center’s work studying how teens and families use social and mobile technologies.