Common Sense Media
ChatGPT

By our AI Review Team.
Last updated October 13, 2023

A powerful, at times risky chatbot for people 13+ that is best used for creativity, not facts

Overall Rating

AI Type: Multi-Use

Privacy Rating: 48%

What is it?

ChatGPT, which stands for Chat Generative Pre-Trained Transformer (say that 10 times fast), is a generative AI chatbot that produces text in response to a wide range of prompts and questions. For example, it can respond to a user in a way that feels like a conversation, or come up with an outline for an essay on the history of television.

ChatGPT was created by the company OpenAI and launched to the public in November 2022. ChatGPT is on its fourth-generation LLM, known as GPT-4. The free version of ChatGPT currently uses an earlier model (GPT-3.5); to access the more powerful GPT-4, users must upgrade to Plus, which costs $20/month. Users can access ChatGPT through mobile apps for Android and iPhone, through a desktop web browser (including Google Chrome, Mozilla Firefox, Safari, Microsoft Edge, and Opera), and through a mobile web browser.

How it works

ChatGPT is a form of generative AI, an emerging field of artificial intelligence. Generative AI is defined by the ability of an AI system to create ("generate") content that is complex, coherent, and original. For example, a generative AI model can create sophisticated writing or images. ChatGPT is a chatbot interface that essentially sits on top of a large language model (LLM). This underlying system is what makes ChatGPT so powerful and able to respond to many kinds of human input.

Large language models are sophisticated computer programs that are designed to generate human-like text. Essentially, when a human user inputs a prompt or question, an LLM quickly analyzes patterns from its training data to guess which words are most likely to come next. For example, when a user inputs "It was a dark and stormy," an LLM is very likely to generate the word "night" but not "algebra." LLMs are able to generate responses to a wide range of questions and prompts because they are trained on massive amounts of information scraped from the internet. In other words, a chatbot powered by an LLM is able to generate responses for many kinds of requests and topics because the LLM has likely seen things like that before. Importantly, LLMs cannot reason, think, feel, or problem-solve, and do not have an inherent sense of right, wrong, or truth.
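The "dark and stormy" example above can be sketched with a toy model. The snippet below is purely illustrative: it builds a simple bigram frequency table from a made-up three-sentence corpus and predicts the most common continuation. Real LLMs like GPT-4 are neural networks trained on vastly more data, but the core idea, predicting the likeliest next word from patterns in training text, is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" (illustration only, not real LLM training data).
corpus = (
    "it was a dark and stormy night . "
    "it was a dark and stormy night outside . "
    "it was a bright and sunny morning ."
).split()

# Count which word follows each word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if never seen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("stormy"))  # "night" — the most frequent continuation
```

Note that the model "knows" nothing about weather or nights; it only counts co-occurrences. That is why, scaled up enormously, this kind of prediction can sound fluent without having any inherent sense of truth.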

Highlights

  • It's best for fiction and creativity. While this is an oversimplification, you can think of ChatGPT as a giant auto-complete system: it simply predicts the words most likely to come next. Because an LLM has been trained on a massive amount of text, that "auto-complete" has a lot to work with. When a generative AI chatbot is factually correct, it's because those responses were generated from accurate information commonly found on the internet. For this reason, and as with all generative AI chatbots, ChatGPT performs best with fiction, not facts. It can be fun for creative use cases, but it should not be relied on for anything that depends on factual accuracy.
  • It can be up to date, for paying users. Paid Plus and Enterprise users can use ChatGPT's browsing feature to access the internet in real time. This feature is not yet available to free users, and generative AI chatbots like ChatGPT should not be used as search tools.

Harms and Ethical Risks

  • LLMs can, and do, create harms, and using them is inherently risky. ChatGPT can be an amazing tool when used responsibly, and knowing why it is risky can help determine how best to use it. This starts with ChatGPT's training data. Any text that can be scraped from the internet could be included in this model. While the details on which corners of the internet have been scraped are unclear, OpenAI has shared that GPT-4 was developed using data that is publicly available on the internet, information that OpenAI licenses from third parties, and data provided by human trainers and user inputs. OpenAI also shares that it filtered this pre-training data to reduce "inappropriate erotic text content." But the internet also includes a vast range of racist and sexist writing, conspiracy theories, misinformation and disinformation, toxic language, insults, and stereotypes about other people. As it predicts words, a generative AI chatbot can repeat this language unless a company stops it from doing so. Importantly, these attempts to limit objectionable material are like bandages: They don't address the root causes, they don't change the underlying training data, and they can only limit harmful content that is already known. We don't know what they don't cover until it surfaces, there are no standard requirements for what they must cover, and they are easily broken.
  • ChatGPT's false information can shape our worldview. ChatGPT can generate or enable false information in a few ways: from "hallucinations"—an informal term used to describe the false content or claims that are often output by generative AI tools; by reproducing misinformation and disinformation; and by reinforcing unfair biases. Because OpenAI's attempts to limit these are brittle, false information is being generated at an alarming speed. As these AI systems grow, it may become increasingly difficult to separate fact from fiction. ChatGPT also adds users' inputs to its already skewed training data. While this helps ChatGPT improve, it also likely increases those skews. This is because today's ChatGPT users are an early-adopter subset of the internet-connected population, which as a whole overrepresents people in wealthier nations, as well as views from people who are wealthier, younger, and male. Combined, these forces carry an even greater risk—one that OpenAI expects to happen if not enough is done to address it—of AI systems to "reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement." We need much stronger oversight and governance of AI to prevent this from happening.

Review team note: We cannot address the full scope of the risks of ChatGPT that OpenAI has publicly discussed. That is not a reflection on whether those risks matter.

Limitations

  • Because ChatGPT isn't designed for factual accuracy, it can and does get things wrong. In OpenAI's own words, GPT-4 has a "tendency to make up facts, to double-down on incorrect information, and to perform tasks incorrectly." For example, when tested against a benchmark widely used to assess a model's factual accuracy, GPT-4's responses were about 60% accurate and GPT-3.5's responses were about 48% accurate. Unfortunately, ChatGPT's inaccuracies can be hard to detect, because the model's responses can sound correct even when they aren't. Importantly, OpenAI found that GPT-4 actually generates responses "in ways that are more convincing and believable than earlier GPT models." Any seemingly factual output needs to be checked, and this absolutely goes for any links, references, or citations too.
  • Parental permission is required, but this isn't obvious. Educators who are using ChatGPT in their classrooms need to know that users must be at least age 13, and that anyone under 18 must have a parent's or legal guardian's permission to use ChatGPT. OpenAI does not check whether permission is in place.
  • ChatGPT performs best in English.

Misuses

  • OpenAI details misuses of all of its models, including ChatGPT, in a comprehensive Usage Policy.
  • OpenAI's terms of service do not allow ChatGPT to be used by children under age 13.
  • Teens aged 13–17 are required to have parental permission to use it.

 

Common Sense AI Principles Assessment

Our assessment of how well this product aligns with each AI Principle.

  • People First

    some

    AI should Put People First. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • OpenAI's Usage Policies do not permit uses that harm human rights, children's rights, identity, integrity, and human dignity. Although the specifics of enforcement remain unclear, OpenAI retains the right to use your inputs or personal information to safeguard these policies.

     

    Important limitations and considerations

    • ChatGPT generates text that often sounds correct, even when it isn't. This makes it very easy for users to be overconfident in ChatGPT's responses.
    • When users rely on ChatGPT's responses, this can have the effect of reducing human agency and oversight. The impact of this can be very harmful, depending on what the topic and responses are.
    • At times, ChatGPT's responses can include expressions of humility and/or uncertainty. While this can help generally flag for users that responses aren't always correct, unfortunately, these expressions aren't an indication of a response's accuracy.
  • Learning

    some

    AI should Promote Learning. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • While ChatGPT is not designed for use in schools, there are many ways that educators can use it in their classrooms.
    • Because it is a multi-use product, educators and older students—with permission from a parent or legal guardian—can explore a wide range of topic areas.
    • ChatGPT can be great for creative use cases, but should not be relied on for anything that depends on factual accuracy.

     

    Important limitations and considerations

    • ChatGPT does not support a personalized learning path or align with content standards.
    • Students could use ChatGPT to skip important aspects of the learning process, such as query, discovery, and productive struggle. Over time, this can harm creativity, communication, and critical thinking capabilities.
    • While ChatGPT can create many learning opportunities, the burden is on the user to discover how to get the most out of it.
    • Any seemingly factual output needs to be checked—and this absolutely goes for any links, references, or citations, too.
    • "AI detectors" are extremely unreliable. They can miss when something has been generated by AI, but can also be wrong and flag content as AI-generated when it was not. If students are then wrongly accused of cheating, they are often left without any way to prove they did not cheat. This is a risk for any text-based generative AI product.

    See our complete ChatGPT review for Educators

  • Fairness

    some

    AI should Prioritize Fairness. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • Compared to earlier versions of ChatGPT, OpenAI has made it harder for the current model to generate harmful content.
    • Our own analysis, which used a number of different test data sets, shows that OpenAI has done a good job of this, at least against those specific data sets. But this is no guarantee, and the harms here can take many forms (see below).
    • ChatGPT can be used to teach teens about unfair bias and responsible use of technology by having them assess its responses for harmful content.

     

    Important limitations and considerations

    • The answer to the question "Is ChatGPT biased?" is yes. And don't just take our word for it—OpenAI agrees. What is much harder to answer is the specific ways, affected groups, and the circumstances when unfair bias can happen.
    • ChatGPT is powered by a massive large language model (LLM). While OpenAI no longer shares information about the training data used to build it, it includes text that is publicly available on the internet. This data is more likely to represent the internet-connected population, which in turn means it overrepresents people in wealthier nations, as well as views from people who are wealthier, younger, and male. In other words, the quantity of training data does not guarantee its diversity.
    • ChatGPT can generate harmful content. This can appear in the form of repeated reinforcement of harmful stereotypes and unfair biases, or it can have a huge impact on individual people.
    • OpenAI notes that ChatGPT can "reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups." It is critical to assess all output from ChatGPT for unfair bias and risk of harm.
  • Social Connection

    some

    AI should Help People Connect. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • ChatGPT has the ability to help people connect indirectly, but that depends on how it is used. It can, for example, help groups brainstorm, create conversation starters, co-create stories, provide communication assistance across languages, or become a part of collaborative group projects.

     

    Important limitations and considerations

    • Impressionable users could develop a parasocial relationship with the chatbot, believing it to be a genuine companion.
    • ChatGPT doesn't currently help people connect through features in its platform. It requires the humans using it to make this happen.
    • Because ChatGPT is biased and dialogue-based, it can reinforce harmful stereotypes or beliefs over time.
    • It can become easy to rely on using ChatGPT, potentially creating a dependence on the tool.
    • OpenAI has taken a number of important steps to reduce ChatGPT's ability to generate harmful, hateful, or dehumanizing content. No protections are perfect, however, and any use of generative AI is inherently risky.
  • Trust

    a little

    AI should Be Trustworthy. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • The OpenAI team embraces peer reviews and invites outside parties to provide feedback and participate in adversarial testing, often called "red teaming."

     

    Important limitations and considerations

    • It is helpful here to think about ChatGPT as a giant auto-complete system. It isn't looking through text to find the best answers—it is designed to predict the words that are most likely to come next in response to a prompt. These responses, including any citations, can be right, or they can be completely wrong. The only way to know is to fact-check.
    • As with all LLMs, ChatGPT can generate or enable false information in a few ways: through "hallucinations" (an informal term for the false content or claims that generative AI tools often output), by reproducing misinformation and disinformation, and by reinforcing unfair biases. Because OpenAI's attempts to limit these are brittle, false information is being generated at an alarming speed. As these AI systems grow, it may become increasingly difficult to separate fact from fiction. ChatGPT also adds users' inputs to its already skewed training data. While this helps ChatGPT improve, it also likely increases those skews, because today's ChatGPT users are an early-adopter subset of the internet-connected population, which as a whole overrepresents people in wealthier nations, as well as views from people who are wealthier, younger, and male. Combined, these forces carry an even greater risk, one that OpenAI expects to happen if not enough is done to address it, of AI systems to "reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement." We need much stronger oversight and governance of AI to prevent this from happening.
  • Data Use

    a little

    AI should Protect Our Privacy. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • ChatGPT's terms of service do not allow its use by children under age 13.
    • Teens aged 13–17 are required to have parental permission to use it.

     

    Violates this AI Principle

    • By default, ChatGPT uses the prompts you input and the conversations you have with it to further train its models. In other words, anything you say to the chatbot—including personal information—will become part of its training data.
    • The default use of conversation data is especially worrying for kids and teens who are using ChatGPT, even if they are not supposed to.

     

    Important limitations and considerations

    • You can stop ChatGPT from using your data, but this option isn't easy to find. If you want to do this, you can begin the process here.
    • While the ChatGPT sign-up process has an age gate, there is nothing to stop kids from signing up if they choose to give an incorrect birth date.
    • It is unclear how aware teachers are of the parental permission requirement for 13- to 17-year-olds, and at the time of this review, the tool does not ask whether permission has been granted.
    • Because of its age policy, ChatGPT is not required to comply with (and to our knowledge, does not comply with) important protections such as the Children's Online Privacy Protection Act (COPPA), the Student Online Personal Information Protection Act (SOPIPA), or the Family Educational Rights and Privacy Act (FERPA). ChatGPT is compliant with the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

    This review is distinct from Common Sense's privacy evaluations and ratings, which evaluate privacy policies to help parents and educators make sense of the complex policies and terms related to popular tools used in homes and classrooms across the country.

  • Kids' Safety

    a little

    AI should Keep Kids & Teens Safe. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • Compared to earlier versions of ChatGPT, OpenAI has made it harder for the model to generate harmful content.
    • Our own analysis, which used a number of different test data sets, shows that OpenAI has done a good job of this, at least against those specific data sets.

     

    Violates this AI Principle

    • Because ChatGPT is not supposed to be used by kids under age 13, and only with permission for anyone under 18, there are no specific protections that we know of for kids and teens.

     

    Important limitations and considerations

    • Any protections that kids and teens experience are the general protections for ChatGPT users. While these cover a lot of the most objectionable content, this is not a tool that can be considered widely safe for kids and teens.
  • Transparency & Accountability

    very little

    AI should Be Transparent & Accountable. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • ChatGPT has a thumbs-up/thumbs-down feedback mechanism for every response, which can be used to flag whether a response is harmful, unsafe, untrue, or not helpful.

     

    Violates this AI Principle

    • Especially because ChatGPT can generate responses that sound correct and authoritative but are not, it is easy for users to take this information at face value. When this output has a direct and significant impact on the world, it can cause serious harm with little accountability. Here are some examples:
      - Verma, P., & Oremus, W. (2023, April 5). ChatGPT invented a sexual harassment scandal and named a real law prof as the accused. Washington Post. 
      - Sands, L. (2023, April 6). ChatGPT falsely told voters their mayor was jailed for bribery. He may sue. Washington Post. 
      - Verma, P. (2023, May 18). A professor accused his class of using ChatGPT, putting diplomas in jeopardy. Washington Post. 
      - Weiser, B. (2023, May 27). Here's what happens when your lawyer uses ChatGPT. New York Times.

     

    Important limitations and considerations

    • At the time of this review, there are no moderation tools for parents or educators.
    • The LLMs used to power chatbots like ChatGPT require massive amounts of text in order to generate responses for a wide range of prompts. This means that a large part of any LLM's training data comes from what is publicly available online. ChatGPT was developed with three primary sources of text data: 1) information that is publicly available on the internet, 2) information that OpenAI licenses from third parties, and 3) information provided by human trainers and users of the tool (yes, this includes your conversations, unless you turn this setting off). Beyond this, OpenAI no longer shares details about the specific data sets that it uses to train ChatGPT. This is unfortunate, as it can be very difficult for validated researchers to independently assess fairness, what data may or may not be copyrighted, and whether any personally identifiable information (PII) is included.

     

    Review team note:

    • While OpenAI engages in transparency reporting, it is highly technical in nature. For those interested: 
      - GPT-4 research (including a system card, which is a type of transparency reporting) can be found here. 
      - OpenAI publishes a large amount of technical AI research.


 

 

Additional Resources

  • Video: Guide to ChatGPT for parents and caregivers
  • For Families: Helping kids navigate the world of artificial intelligence
  • Education: Free educator resources to explore and use ChatGPT and AI

 

 


 

Common Sense is the nation's leading nonprofit organization dedicated to improving the lives of all kids and families by providing the trustworthy information, education, and independent voice they need to thrive in the 21st century.

© Common Sense Media. All rights reserved. Common Sense and other associated names and logos are trademarks of Common Sense Media, a 501(c)(3) nonprofit organization (FEIN: 41-2024986).