
Stable Diffusion

By our AI Review Team.
Last updated November 5, 2023

Powerful image generator can unleash creativity, but is wildly unsafe and perpetuates harm

Overall Rating

AI Type: Multi-Use

Privacy Rating: 48%

We do not consider this a safe tool, so we won't link directly to it in this review.

 

What is it?

Stable Diffusion is a generative AI product created by Stability AI. It can create realistic images and art from a text-based description that can combine concepts, attributes, and styles. Stability AI's full suite of image editing tools offers users a sophisticated range of options: extending generated images beyond the original frame (outpainting), making authentic modifications to existing user-uploaded or AI-generated pictures, and incorporating or eliminating components while considering shadows, reflections, and textures (inpainting). Once users achieve the generated image they want, they can download and use it.

Stability AI released Stable Diffusion to the public in November 2022. It is powered by a massive data set of image-text pairs scraped from the internet; the data set includes a subset of 2.32 billion images that contain English text. The data set was created by LAION, which stands for "Large-scale Artificial Intelligence Open Network," a nonprofit organization that is funded in part by Stability AI.

Stable Diffusion can be accessed in three separate places hosted by Stability AI:

  1. Clipdrop, Stability AI's text-to-image editor, which has three pricing tiers: free, pro ($9/month), and API pricing, in which users purchase credits that are used to pay for the computing cost of each request.
  2. Dreamstudio, another image editor from Stability AI that extends beyond text-to-image prompting with inpainting, outpainting, and image-to-image generation, which requires users to purchase credits that are used to pay for the computing cost of each request. Currently, $10 equals 1,000 credits, which Stability AI notes is ~5,000 images.
  3. Stability.ai's developer platform, which—like Dreamstudio—requires users to purchase credits that are used to pay for the computing cost of each request. Currently, $10 equals 1,000 credits, which Stability AI notes is ~5,000 images.
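
For a rough sense of what this credit pricing means per image, here is a quick back-of-the-envelope calculation. It is only a sketch, assuming the quoted rates ($10 per 1,000 credits, roughly 5,000 images per 1,000 credits) hold at default generation settings:

```python
# Back-of-the-envelope cost math for the credit pricing quoted above.
# Assumes the advertised rates; actual cost varies with generation settings.
DOLLARS_PER_CREDIT = 10 / 1_000    # $0.01 per credit
IMAGES_PER_CREDIT = 5_000 / 1_000  # ~5 images per credit at default settings

def estimated_cost(num_images: int) -> float:
    """Approximate dollar cost to generate num_images."""
    credits_needed = num_images / IMAGES_PER_CREDIT
    return credits_needed * DOLLARS_PER_CREDIT

print(f"${estimated_cost(100):.2f} for 100 images")  # ~$0.20
```

At the advertised rates, then, a single image costs about a fifth of a cent, which helps explain how quickly content can be mass-produced with tools like this.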

In addition, Stability AI has made all of Stable Diffusion's model weights and code available. Anyone is able to access, download, and use the full model.

How it works

Stable Diffusion is a form of generative AI, an emerging field of artificial intelligence. Generative AI is defined by the ability of an AI system to create ("generate") content that is complex, coherent, and original. For example, a generative AI model can create sophisticated writing or images.

Stable Diffusion uses a particular type of generative AI called a "diffusion model," named for the natural process of diffusion that it mimics. Diffusion is a phenomenon you've likely experienced before: drop some food coloring into a glass of water, and no matter where it starts, it eventually spreads throughout the entire glass and colors the water uniformly. For computer pixels, the equivalent is random motion that always ends in "TV static"—the image counterpart of uniformly colored water. A machine-learning diffusion model works by, oddly enough, destroying its training data by successively adding "TV static," and then learning to reverse this process to generate something new. Diffusion models are capable of generating high-quality images with fine details and realistic textures.
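
To make this concrete, here is a minimal, illustrative numpy sketch of the forward ("destruction") half of that process. This is not Stability AI's code, and real diffusion models use carefully designed noise schedules; it only shows how an image can be blended, step by step, into pure static:

```python
import numpy as np

# A minimal sketch of the "destruction" half of a diffusion model:
# blending an image toward Gaussian noise ("TV static"). Real models
# use a learned neural network to reverse this process step by step;
# the reverse pass is omitted here.

rng = np.random.default_rng(0)
image = rng.uniform(0, 1, size=(64, 64, 3))  # stand-in for one training image

def add_noise(x, t, num_steps=1000):
    """Blend x toward pure static; at t == num_steps nothing of x survives."""
    keep = 1.0 - t / num_steps                # fraction of the signal retained
    noise = rng.standard_normal(x.shape)
    return np.sqrt(keep) * x + np.sqrt(1.0 - keep) * noise

slightly_noisy = add_noise(image, t=100)      # mostly image, a little static
pure_static = add_noise(image, t=1000)        # indistinguishable from noise
```

A trained model learns to run this process in reverse, starting from static and removing a little noise at a time until an image emerges.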

Stable Diffusion combines a diffusion model with a text-to-image model. A text-to-image model is a machine learning algorithm that uses natural language processing (NLP), a field of AI that allows computers to understand and process human language. Stable Diffusion takes in a natural language input and produces an image that attempts to match the description.
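
Putting the two pieces together, the sketch below shows the control flow of text-conditioned generation. Both `encode_text` and `predict_noise` are hypothetical placeholders standing in for Stable Diffusion's real components (a CLIP text encoder and a learned denoising network), so the output here is just noise; the point is the loop structure, not the result:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_text(prompt):
    """Placeholder for a real text encoder (Stable Diffusion uses CLIP)."""
    return rng.standard_normal(768)           # fake 768-dimensional embedding

def predict_noise(x, t, text_embedding):
    """Placeholder for the learned denoiser conditioned on the prompt."""
    return rng.standard_normal(x.shape)       # a real model predicts the noise

def generate(prompt, num_steps=50):
    """Start from pure static and iteratively denoise toward the prompt."""
    text_embedding = encode_text(prompt)
    x = rng.standard_normal((64, 64, 3))      # pure "TV static"
    for t in reversed(range(num_steps)):
        x = x - predict_noise(x, t, text_embedding) / num_steps
    return x

image = generate("a watercolor painting of a lighthouse at dusk")
```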

Highlights

  • Stable Diffusion has the potential to enable creativity and artistic expression, allow for visualization of new ideas, and create new concepts and campaigns.
  • Stability AI suggests that the best uses of Stable Diffusion include: generation of artworks and use in design and other artistic processes; applications in educational or creative tools; research on generative models; safe deployment of models that have the potential to generate harmful content; and probing and understanding the limitations and biases of generative models.

Harms and Ethical Risks

  • Stable Diffusion's "view" of the world can shape impressionable minds, and with little accountability. Even when instructed to do otherwise, Stable Diffusion is susceptible to generating outputs that perpetuate harmful stereotypes, especially regarding race and gender. We confirmed this repeatedly with our own testing. These behaviors reflect both the way in which the model was trained and—critically—the choice of the data set used to train it. LAION 5B, the data set that powers Stable Diffusion, is uncurated. This means that it contains every image found in the Common Crawl repository that has one or more text labels that would be usable for the image-text pairs that the machine learning model needs to match a user's input to images it can use to generate the result. While some filters have been applied, LAION notes that because Stable Diffusion is uncurated, the links that make up the data set "may lead to strongly discomforting and disturbing content for a human viewer." Based on LAION's own measurement, 2.9% of the 2.3 billion image-text pairs used by Stable Diffusion are "unsafe"—that is, roughly 68 million unsafe images. All of the technical documentation clearly states that this data set should be used only for research purposes. But Stable Diffusion is accessible to anyone, and Stability AI has made the model that powers it available for anyone to download and use for their own purposes. These propensities towards harm are frighteningly powerful. The risk this poses to children especially, in terms of what they might see or be exposed to, is unfathomable. What happens to our children when they are exposed to the worldview of a biased algorithm repeatedly and over time? What view of the world will they assume is "correct," and how will this inform their interactions with real people and society? Who is accountable for allowing this to happen?
  • Inappropriate sexualized representations of women and girls harm all users. Despite many public failings, Stable Diffusion continues to easily produce inappropriately sexualized representations of women and girls, even with prompts seeking images of women professionals. This perpetuates harmful stereotypes, unfair bias, unrealistic ideals of women's beauty and "sexiness," and incorrect beliefs around intimacy for humans of all genders. Numerous studies have shown that greater exposure to images that promote the objectification of women adversely affects the mental and physical health of girls and women. Notably, while this is an issue for all text-to-image generators, it is especially harmful with Stable Diffusion. This is because of the combination of an uncurated data set and minimal protections, such as a refusal to generate images when it detects prompts that violate the company's terms of service.
  • Stable Diffusion consistently and easily reinforces harmful stereotypes. While Stable Diffusion's July 2023 update aimed to prevent it from generating some of the most objectionable content, this remains a significant risk. Recent findings show continued reinforcement of harmful stereotypes, and the manner in which Stability AI has open-sourced the model allows anyone to remove those protections in new applications. A great resource for exploring this problem further can be found at Stable Bias. Relevant articles:
    - Tiku, N., Schaul, K., & Chen, S.Y. (2023, Nov. 1). How AI is crafting a world where our worst stereotypes are realized. Washington Post.
    - Crawford, A., & Smith, T. (2023, June 28). Illegal trade in AI child sex abuse images exposed. BBC.
    - Harlan, E., & Brunner, K. (2023, June 7). We are all raw material for AI. BR24.
    - Nicoletti, L., & Bass, D. (June 2023). Humans are biased. Generative AI is even worse. Bloomberg.
    - Vincent, J. (2023, Jan. 16). I art tools Stable Diffusion and Midjourney targeted with copyright lawsuit. The Verge.
    - Edwards, B. (2022, Sept. 21). Artist finds private medical record photos in popular AI training data set. Ars Technica.
    - Wiggers, K. (2022, Aug. 24). Deepfakes for all: Uncensored AI art model prompts ethics questions. TechCrunch.
    - Wiggers, K. (2022, Aug. 12). This startup is setting a DALL-E 2-like AI free, consequences be damned. TechCrunch.
  • Stable Diffusion's advanced inpainting and outpainting features present new risks. While innovative and useful in many contexts, the high degree of freedom to alter images means they can be used to perpetuate harms and falsehoods. Images that have been changed to, for example, modify, add, or remove clothing, or add additional people to an image in compromising ways, could be used to either directly harass or bully an individual, or to blackmail or exploit them. These features can also be used to create images that intentionally mislead and misinform others. For example, disinformation campaigns can remove objects or people from images or create images that stage false events.
  • Tools like Stable Diffusion pave the path to misinformation and disinformation. As with all generative AI tools, Stable Diffusion can easily generate or enable false and harmful content, both by reinforcing unfair biases, and by generating images that intentionally mislead or misinform others. Because Stability AI has taken minimal efforts to limit this, and images can be further manipulated with generative AI via in- and outpainting, false and harmful visual content can be generated at an alarming speed. We have already seen this in action. As OpenAI has noted in the context of DALL-E, as image generation matures, it "leaves fewer traces and indicators that outputs are AI-generated, making it easier to mistake generated images for authentic ones and vice versa." In other words, as these AI systems grow, it may become increasingly difficult to separate fact from fiction. This "Liar's Dividend" could erode trust to the point where democracy or civic institutions are unable to function.
  • The Stable Diffusion model is intended for research only, but Stability AI has made it available to everyone. Deep in its technical model card, Stability AI notes that Stable Diffusion is intended for research purposes only, and that "while the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases." Unfortunately, this information is currently nowhere to be found on Clipdrop or Dreamstudio, where Stable Diffusion is accessible to anyone.

Limitations

  • We did not receive participatory disclosures from Stability AI for Stable Diffusion. This assessment is based on publicly available information, our own testing, and our review process.
  • Those who choose to use Stable Diffusion should educate themselves on best practices in prompting to ensure responsible use to the best extent possible. Resources like this that were created for DALL-E, another text-to-image generative AI model, can help.

Misuses

Stable Diffusion does have legal terms, but protections for children are unclear. One reason for this lack of clarity stems from the fact that Stable Diffusion can be accessed in three separate places hosted by Stability AI:

  1. Clipdrop, Stability AI's text-to-image editor, a simple interface more accessible to consumers. Clipdrop states in its Terms of Use that users are prohibited from downloading or producing content that, among other prohibited uses, infringes on "public order and morality." Children's rights are not specifically addressed in Clipdrop's terms. Clipdrop's terms state that minors must have permission from their legal representative to use this product.
  2. Dreamstudio, another image editor from Stability AI that extends beyond text-to-image prompting with inpainting, outpainting, and image-to-image generation. Dreamstudio's Terms of Service contain an expanded list of prohibited uses and introduce Community Guidelines, which note that "contributions must be safe, legal, and in accordance with these Terms." Dreamstudio's terms state that minors are prohibited from using this product.
  3. Stability.ai's developer platform, which has its own, more exhaustive Acceptable Use Policy. These terms specifically prohibit use of Stability Technology for, among other prohibited uses, "Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content." Stability AI's terms state that minors are prohibited from using this product.

Because Stable Diffusion can be accessed from each of these tools, it is unclear which set of terms may be enforced, why there is a discrepancy between these terms, how these terms might be enforced, and by whom.

 

Common Sense AI Principles Assessment

Our assessment of how well this product aligns with each AI Principle.

Common Crawl</a> repository that has one or more text labels that would be usable for the image-text pairs that the machine learning model needs to match a user's input to images it can use to generate the result. While some filters have been applied, LAION notes that because it is uncurated, the links that make up the data set "may lead to strongly discomforting and disturbing content for a human viewer." Based on LAION's own measurement, 2.9% of the 2.3 billion image-text pairs used by Stable Diffusion are "unsafe"—that is, roughly 68 million unsafe images. All of the technical documentation clearly states that this data set should be used only for research purposes. But Stable Diffusion is available to anyone. The risk this poses to children especially, in terms of what they might see or be exposed to, is unfathomable.</li> </ul> <p>&nbsp;</p> <h3>Important limitations and considerations</h3> <ul> <li style="line-height:1.5;margin-bottom:5px;">While Stable Diffusion is very easy and intuitive to use, the ethical risks described throughout this review make this ease of use even more problematic.</li> <li style="line-height:1.5;margin-bottom:5px;">Those who choose to use Stable Diffusion should educate themselves on best practices in prompting to ensure responsible use to the best extent possible. <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://help.openai.com/en/articles/6582391-how-can-i-improve-my-prompts-with-dall-e">Resources like this</a> that were created for DALL-E, another text-to-image generative AI model, can help.</li> <li style="line-height:1.5;margin-bottom:5px;"> <p>Stable Diffusion does have legal terms, but protections for children are unclear. One reason for this lack of clarity stems from the fact that Stable Diffusion can be accessed in three separate places hosted by Stability AI:<br>&nbsp;</p> <ol> <li style="line-height:1.5;margin-bottom:5px;">Clipdrop, Stability AI's text-to-image editor, a simple interface more accessible to consumers. Clipdrop states in its Terms of Use that users are prohibited from downloading or producing content that, among other prohibited uses, infringes on "public order and morality." Children's rights are not specifically addressed in Clipdrop's terms.</li> <li style="line-height:1.5;margin-bottom:5px;">Dreamstudio, another image editor from Stability AI that extends beyond text-to-image prompting with inpainting, outpainting, and image-to-image generation. Dreamstudio's Terms of Service contain an expanded list of prohibited uses and introduce Community Guidelines which note "contributions must be safe, legal, and in accordance with these Terms."</li> <li style="line-height:1.5;margin-bottom:5px;">Stability.ai's developer platform, which has its own, more exhaustive Acceptable Use Policy. These terms specifically prohibit use of Stability Technology for, among other prohibited uses, "Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content."</li> </ol> <p><br>Because Stable Diffusion can be accessed from each of these tools, it is unclear which set of terms may be enforced, why there is a discrepancy between these terms, how these terms might be enforced, and by whom.</p> </li> </ul> ">
exploring the Stable Bias resource instead</a>.</li> </ul> <p>&nbsp;</p> <h3><strong>Violates this AI Principle</strong></h3> <ul> <li style="line-height:1.5;margin-bottom:5px;">The risk of exposure to unsafe content generated by Stable Diffusion is so high that <strong>we do not recommend direct use of this tool in any learning environment</strong>.</li> </ul> <p>&nbsp;</p> <h3><strong>Important limitations and considerations</strong></h3> <ul> <li style="line-height:1.5;margin-bottom:5px;">Stable Diffusion is not designed for educational use and is not aligned with content standards.</li> <li style="line-height:1.5;margin-bottom:5px;">Users should not attempt to use Stable Diffusion to output images to visualize any process or scene that requires accuracy.</li> <li style="line-height:1.5;margin-bottom:5px;">It is extremely easy when using Stable Diffusion to unwittingly produce images that reinforce unfair bias and stereotypes.</li> </ul> ">
adversely affects the mental and physical health</a> of girls and women. Notably, while this is an issue for all image-to-text generators, it is especially harmful with Stable Diffusion. This is because of the combination of an uncurated data set and minimal protections, such as a refusal to generate images when it detects prompts that violate the company's terms of service.</li> <li style="line-height:1.5;margin-bottom:5px;">Even when instructed to do otherwise, Stable Diffusion is susceptible to generating outputs that <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/">perpetuate harmful stereotypes</a>, especially regarding <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www.bloomberg.com/graphics/2023-generative-ai-bias/">race and gender</a>. A great resource for exploring this further can be found at <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://huggingface.co/spaces/society-ethics/StableBias">Stable Bias</a>. Our own testing confirmed this and the ease with which these outputs are generated. Some examples of what we found include: <ul> <li style="line-height:1.5;margin-bottom:5px;">Stable Diffusion attributed being "attractive" to White faces, "emotional" to female faces, "thug" to Black male faces, "terrorist" to stereotypes of Middle Eastern male faces, and "housekeeper" to Black and Brown females.</li> <li style="line-height:1.5;margin-bottom:5px;">When asked to generate images of a "poor White person," Stable Diffusion would often generate images of Black men. When asked to pair non-White ethnicities with wealth, Stable Diffusion struggled to do so. Instead, it generated images associated with poverty or severely degraded images.</li> <li style="line-height:1.5;margin-bottom:5px;">Stable Diffusion reflected and amplified statistical gender stereotypes for occupations (e.g., only female flight attendants and stay-at-home parents, male chefs, female cooks, male software developers).</li> </ul> </li> </ul> <p>&nbsp;</p> <h3>Important limitations and considerations</h3> <ul> <li style="line-height:1.5;margin-bottom:5px;">Ensuring that prompts are specific and "grounded" can help reduce certain biases in underspecified prompts, though research indicates that bias can still persist.</li> <li style="line-height:1.5;margin-bottom:5px;">Stable Diffusion struggles to represent ideas and people that do not appear in its training data, leading to disparate performance. This bias requires some users, especially those in marginalized groups, to be very specific in their prompts, while others find the tool intuitively tailored to their needs. This can also result in inferior images for outputs describing concepts outside of the training data set.</li> <li style="line-height:1.5;margin-bottom:5px;">It is very easy to unwittingly produce images that reinforce unfair bias and stereotypes using Stable Diffusion. This can shape users' beliefs and worldview about what is "good" and "normal."</li> </ul> ">
this in <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www.technologyreview.com/2021/09/13/1035449/ai-deepfake-app-face-swaps-women-into-porn/">action. As OpenAI has <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://github.com/openai/dalle-2-preview/blob/main/system-card_04062022.md#model">noted</a> in the context of DALL-E, as image generation matures, it "leaves fewer traces and indicators that outputs are AI-generated, making it easier to mistake generated images for authentic ones and vice versa." In other words, as these AI systems grow, it may become increasingly difficult to separate fact from fiction. This "<a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://scholarship.law.bu.edu/faculty_scholarship/640/">Liar's Dividend</a>" could erode trust to the point where democracy or civic institutions are unable to function.</li> </ul> <p>&nbsp;</p> <h3>Important limitations and considerations</h3> <ul> <li style="line-height:1.5;margin-bottom:5px;">While this would be a violation of Stable Diffusion's terms of service, it would be very easy to generate images that could be used in misinformation and disinformation campaigns. Many of the organizations responsible for text-to-image generative AI models take steps to avoid the potential to depict public figures. By contrast, Stable Diffusion is capable of generating new content that depicts public figures. This makes it very easy to use it to create <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://techcrunch.com/2022/08/24/deepfakes-for-all-uncensored-ai-art-model-prompts-ethics-questions/">deepfakes. </ul> ">
highly sensitive personally identifiable information</a> (PII).</li> <li style="line-height:1.5;margin-bottom:5px;">Many of the organizations responsible for text-to-image generative AI models take steps to avoid the potential to depict public figures. By contrast, Stable Diffusion is capable of generating new content that depicts public figures. This makes it very easy to use it to create <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://techcrunch.com/2022/08/24/deepfakes-for-all-uncensored-ai-art-model-prompts-ethics-questions/">deepfakes. </ul> <p>&nbsp;</p> <h3>Important limitations and considerations</h3> <ul> <li style="line-height:1.5;margin-bottom:5px;">Clipdrop's terms state that minors must have permission from their legal representative. Both Dreamstudio and Stability AI's terms state that minors are prohibited from using the services. it is unclear which set of terms may be enforced, why there is a discrepancy between these terms, how these terms might be enforced, and by whom.</li> <li style="line-height:1.5;margin-bottom:5px;">Stable Diffusion was not designed with student privacy in mind. Any student using the service will be subject to the same policies as any other consumer.</li> <li style="line-height:1.5;margin-bottom:5px;">Because of its age policy, Stable Diffusion is not required to comply with (and to our knowledge, does not comply with) important protections such as the <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa">Children's Online Privacy and Protection Act (COPPA)</a>, the <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"/kids-action/about-us/our-issues/digital-life/sopipa">Student Online Personal Information Protection Act (SOPIPA)</a> or the <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www2.ed.gov/policy/gen/guid/fpco/ferpa/index.html">Family Educational Rights and Privacy Act (FERPA)</a>. Stable Diffusion is compliant with the <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://gdpr.eu/">General Data Protection Regulation (GDPR)</a>.</li> </ul> <p><em>This review is distinct from Common Sense's privacy </em><a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://privacy.commonsense.org/resource/evaluation-process">evaluations and </em><a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://privacy.commonsense.org/resource/privacy-ratings">ratings, which evaluate privacy policies to help parents and educators make sense of the complex policies and terms related to popular tools used in homes and classrooms across the country.</em></p> ">
https://www.bbc.com/news/uk-65932372" class="link"&gt;These images have then been sold online. While Stable Diffusion's July 2023 update aimed to prevent it from generating some of the most objectionable content, the open source nature of the model allows for easy removal of those protections in new applications.</li> <li style="line-height:1.5;margin-bottom:5px;">Stable Diffusion's "view" of the world can shape impressionable minds, and with little accountability. Even when instructed to do otherwise, Stable Diffusion is susceptible to generating outputs that <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/">perpetuate harmful stereotypes</a>, especially regarding <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www.bloomberg.com/graphics/2023-generative-ai-bias/">race and gender</a>. We confirmed this repeatedly with our own testing. These behaviors reflect both the way in which the model was trained and—critically—the choice of the data set used to train it. LAION 5B, the data set that powers Stable Diffusion, is uncurated. This means that it contains every image found in the <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://commoncrawl.org/">Common Crawl</a> repository that has one or more text labels that would be usable for the image-text pairs that the machine learning model needs to match a user's input to images it can use to generate the result. While some filters have been applied, LAION notes that because itStable Diffusion is uncurated, the links that make up the data set "may lead to strongly discomforting and disturbing content for a human viewer." Based on LAION's own measurement, 2.9% of the 2.3 billion image-text pairs used by Stable Diffusion are "unsafe"—that is, roughly 68 million unsafe images. All of the technical documentation clearly states that this data set should be used only for research purposes. But Stable Diffusion is accessible to anyone, and Stability AI has made the model that powers it available for anyone to download and use for their own purposes. These propensities towards harm are frighteningly powerful. The risk this poses to children especially, in terms of what they might see or be exposed to, is unfathomable. What happens to our children when they are exposed to the worldview of a biased algorithm repeatedly and over time? What view of the world will they assume is "correct," and how will this inform their interactions with real people and society? Who is accountable for allowing this to happen?</li> <li style="line-height:1.5;margin-bottom:5px;">Stable Diffusion has not been designed in any specific way to protect children. Stable Diffusion has been found to be able to output images that can emotionally and psychologically harm users, perpetuate harmful stereotypes, and promote mis/disinformation.</li> <li style="line-height:1.5;margin-bottom:5px;">At the time of this review, there are no age or terms of service gates when signing up to use Stable Diffusion on Clipdrop, Dreamstudio, or—importantly—Stability AI's developer platform where users can easily access the full model. Unless a user seeks out the terms for themselves, they do not know what is and isn't allowed. This also means that there are no protections for children and teens from general use of the product.</li> </ul> ">
  • People First

    very little

    AI should Put People First. See our criteria for this AI Principle.

    Violates this AI Principle

    • LAION 5B, the data set that powers Stable Diffusion, is uncurated. This means that it contains every image found in the Common Crawl repository that has one or more text labels that would be usable for the image-text pairs that the machine learning model needs to match a user's input to images it can use to generate the result. While some filters have been applied, LAION notes that because it is uncurated, the links that make up the data set "may lead to strongly discomforting and disturbing content for a human viewer." Based on LAION's own measurement, 2.9% of the 2.3 billion image-text pairs used by Stable Diffusion are "unsafe"—that is, roughly 68 million unsafe images. All of the technical documentation clearly states that this data set should be used only for research purposes. But Stable Diffusion is available to anyone. The risk this poses to children especially, in terms of what they might see or be exposed to, is unfathomable.

     

    Important limitations and considerations

    • While Stable Diffusion is very easy and intuitive to use, the ethical risks described throughout this review make this ease of use even more problematic.
    • Those who choose to use Stable Diffusion should educate themselves on best practices in prompting to ensure responsible use to the best extent possible. Resources like this that were created for DALL-E, another text-to-image generative AI model, can help.
    • Stable Diffusion does have legal terms, but protections for children are unclear. One reason for this lack of clarity stems from the fact that Stable Diffusion can be accessed in three separate places hosted by Stability AI:
       

      1. Clipdrop, Stability AI's text-to-image editor, a simple interface more accessible to consumers. Clipdrop states in its Terms of Use that users are prohibited from downloading or producing content that, among other prohibited uses, infringes on "public order and morality." Children's rights are not specifically addressed in Clipdrop's terms.
      2. Dreamstudio, another image editor from Stability AI that extends beyond text-to-image prompting with inpainting, outpainting, and image-to-image generation. Dreamstudio's Terms of Service contain an expanded list of prohibited uses and introduce Community Guidelines which note "contributions must be safe, legal, and in accordance with these Terms."
      3. Stability.ai's developer platform, which has its own, more exhaustive Acceptable Use Policy. These terms specifically prohibit use of Stability Technology for, among other prohibited uses, "Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content."


      Because Stable Diffusion can be accessed from each of these tools, it is unclear which set of terms may be enforced, why there is a discrepancy between these terms, how these terms might be enforced, and by whom.

  • Learning

    very little

    AI should Promote Learning. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • While Stable Diffusion is not designed for use in schools, educators could use it in their classrooms with strong oversight. In particular, Stable Diffusion can be a useful tool in teaching students about how to recognize and question societal biases. Rather than using Stable Diffusion itself for this purpose, however, we recommend exploring the Stable Bias resource instead.

     

    Violates this AI Principle

    • The risk of exposure to unsafe content generated by Stable Diffusion is so high that we do not recommend direct use of this tool in any learning environment.

     

    Important limitations and considerations

    • Stable Diffusion is not designed for educational use and is not aligned with content standards.
    • Users should not attempt to use Stable Diffusion to output images to visualize any process or scene that requires accuracy.
    • It is extremely easy when using Stable Diffusion to unwittingly produce images that reinforce unfair bias and stereotypes.
  • Fairness

    very little

    AI should Prioritize Fairness. See our criteria for this AI Principle.

    Violates this AI Principle

    • Despite many public failings, Stable Diffusion continues to produce inappropriately sexualized representations of women and girls, even with neutral prompts or prompts seeking images of women professionals. Numerous studies have shown that greater exposure to images that promote the objectification of women adversely affects the mental and physical health of girls and women. Notably, while this is an issue for all text-to-image generators, it is especially harmful with Stable Diffusion. This is because of the combination of an uncurated data set and minimal protections, such as a refusal to generate images when it detects prompts that violate the company's terms of service.
    • Even when instructed to do otherwise, Stable Diffusion is susceptible to generating outputs that perpetuate harmful stereotypes, especially regarding race and gender. A great resource for exploring this further can be found at Stable Bias. Our own testing confirmed this and the ease with which these outputs are generated. Some examples of what we found include:
      • Stable Diffusion attributed being "attractive" to White faces, "emotional" to female faces, "thug" to Black male faces, "terrorist" to stereotypes of Middle Eastern male faces, and "housekeeper" to Black and Brown females.
      • When asked to generate images of a "poor White person," Stable Diffusion would often generate images of Black men. When asked to pair non-White ethnicities with wealth, Stable Diffusion struggled to do so. Instead, it generated images associated with poverty or severely degraded images.
      • Stable Diffusion reflected and amplified statistical gender stereotypes for occupations (e.g., only female flight attendants and stay-at-home parents, male chefs, female cooks, male software developers).

     

    Important limitations and considerations

    • Ensuring that prompts are specific and "grounded" can help reduce certain biases in underspecified prompts, though research indicates that bias can still persist.
    • Stable Diffusion struggles to represent ideas and people that do not appear in its training data, leading to disparate performance. This bias requires some users, especially those in marginalized groups, to be very specific in their prompts, while others find the tool intuitively tailored to their needs. This can also result in inferior images for outputs describing concepts outside of the training data set.
    • It is very easy to unwittingly produce images that reinforce unfair bias and stereotypes using Stable Diffusion. This can shape users' beliefs and worldview about what is "good" and "normal."
  • Social Connection

    very little

    AI should Help People Connect. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • With close monitoring and oversight, Stable Diffusion can offer a unique way to boost social interaction and understanding. It can enable those with limited artistic talent to convey their ideas creatively and aid in visual storytelling.

     

    Important limitations and considerations

    • It is very easy to use Stable Diffusion to generate images that can harm individuals and groups. On their own, generated images can reinforce harmful stereotypes about identity and occupation, and dehumanize individuals or groups. These could further be used to incite or promote hatred or disseminate disinformation. This can happen with an ease and speed that creates special concern for use of Stable Diffusion, regardless of whether these activities are against the terms of service.
    • Dreamstudio, one of Stability AI's image editors, extends beyond text-to-image prompting with features like inpainting, outpainting, and image-to-image generation. These features present new risks. While innovative and useful in many contexts, the high degree of freedom to alter images means that they can be used to perpetuate harms and falsehoods. Images that have been changed to, for example, modify, add, or remove clothing, or add additional people to an image in compromising ways, could be used to either directly harass or bully an individual, or to blackmail or exploit them. These features can also be used to create images that intentionally mislead and misinform others. For example, misinformation campaigns can remove objects or people from images or create images that stage false events.
  • Trust

    very little

    AI should Be Trustworthy. See our criteria for this AI Principle.

    Violates this AI Principle

    • At the time of this review, there are no age or terms of service gates when signing up to use Stable Diffusion on Clipdrop, Dreamstudio, or–importantly–Stability AI's developer platform where users can easily access the full model. Unless a user seeks out the terms for themselves, they do not know what is and isn't allowed. This also means that there are no protections for children and teens from general use of the product, access to the underlying model, or any open beta testing.
    • Stable Diffusion does not appear to add watermarks to users' images, which removes barriers to the spread of misinformation and harmful stereotypes.
    • As with all generative AI tools, Stable Diffusion can easily generate or enable false and harmful content, both by reinforcing unfair biases, and by generating images that intentionally mislead or misinform others. Because Stability AI has taken minimal efforts to limit this, and images can be further manipulated with generative AI via in- and outpainting, false and harmful visual content can be generated at an alarming speed. We have already seen this in action. As OpenAI has noted in the context of DALL-E, as image generation matures, it "leaves fewer traces and indicators that outputs are AI-generated, making it easier to mistake generated images for authentic ones and vice versa." In other words, as these AI systems grow, it may become increasingly difficult to separate fact from fiction. This "Liar's Dividend" could erode trust to the point where democracy or civic institutions are unable to function.

     

    Important limitations and considerations

    • While this would be a violation of Stable Diffusion's terms of service, it would be very easy to generate images that could be used in misinformation and disinformation campaigns. Many of the organizations responsible for text-to-image generative AI models take steps to avoid the potential to depict public figures. By contrast, Stable Diffusion is capable of generating new content that depicts public figures. This makes it very easy to use it to create deepfakes.
  • Data Use

    very little

    AI should Protect Our Privacy. See our criteria for this AI Principle.

    Violates this AI Principle

    • Because the data set used to power Stable Diffusion is uncurated, it has generated content that includes images with highly sensitive personally identifiable information (PII).
    • Many of the organizations responsible for text-to-image generative AI models take steps to avoid the potential to depict public figures. By contrast, Stable Diffusion is capable of generating new content that depicts public figures. This makes it very easy to use it to create deepfakes.

     

    Important limitations and considerations

    • Clipdrop's terms state that minors must have permission from their legal representative. Both Dreamstudio and Stability AI's terms state that minors are prohibited from using the services. It is unclear which set of terms may be enforced, why there is a discrepancy between these terms, how these terms might be enforced, and by whom.
    • Stable Diffusion was not designed with student privacy in mind. Any student using the service will be subject to the same policies as any other consumer.
    • Because of its age policy, Stable Diffusion is not required to comply with (and to our knowledge, does not comply with) important protections such as the Children's Online Privacy Protection Act (COPPA), the Student Online Personal Information Protection Act (SOPIPA), or the Family Educational Rights and Privacy Act (FERPA). Stable Diffusion is compliant with the General Data Protection Regulation (GDPR).

    This review is distinct from Common Sense's privacy evaluations and ratings, which evaluate privacy policies to help parents and educators make sense of the complex policies and terms related to popular tools used in homes and classrooms across the country.

  • Kids' Safety

    very little

    AI should Keep Kids & Teens Safe. See our criteria for this AI Principle.

    Violates this AI Principle

    • Stable Diffusion has been used to create lifelike images of child sexual abuse, including of the sexual abuse of babies and toddlers. These images have then been sold online (https://www.bbc.com/news/uk-65932372). While Stable Diffusion's July 2023 update aimed to prevent it from generating some of the most objectionable content, the open-source nature of the model allows for easy removal of those protections in new applications.
    • Stable Diffusion's "view" of the world can shape impressionable minds, and with little accountability. Even when instructed to do otherwise, Stable Diffusion is susceptible to generating outputs that perpetuate harmful stereotypes, especially regarding race and gender. We confirmed this repeatedly with our own testing. These behaviors reflect both the way in which the model was trained and—critically—the choice of the data set used to train it. LAION 5B, the data set that powers Stable Diffusion, is uncurated. This means that it contains every image found in the Common Crawl repository that has one or more text labels that would be usable for the image-text pairs that the machine learning model needs to match a user's input to images it can use to generate the result. While some filters have been applied, LAION notes that because itStable Diffusion is uncurated, the links that make up the data set "may lead to strongly discomforting and disturbing content for a human viewer." Based on LAION's own measurement, 2.9% of the 2.3 billion image-text pairs used by Stable Diffusion are "unsafe"—that is, roughly 68 million unsafe images. All of the technical documentation clearly states that this data set should be used only for research purposes. But Stable Diffusion is accessible to anyone, and Stability AI has made the model that powers it available for anyone to download and use for their own purposes. These propensities towards harm are frighteningly powerful. The risk this poses to children especially, in terms of what they might see or be exposed to, is unfathomable. What happens to our children when they are exposed to the worldview of a biased algorithm repeatedly and over time? What view of the world will they assume is "correct," and how will this inform their interactions with real people and society? Who is accountable for allowing this to happen?
    • Stable Diffusion has not been designed in any specific way to protect children. It has been found to output images that can emotionally and psychologically harm users, perpetuate harmful stereotypes, and promote mis/disinformation.
    • At the time of this review, there are no age or terms of service gates when signing up to use Stable Diffusion on Clipdrop, Dreamstudio, or—importantly—Stability AI's developer platform where users can easily access the full model. Unless a user seeks out the terms for themselves, they do not know what is and isn't allowed. This also means that there are no protections for children and teens from general use of the product.
  • Transparency & Accountability

    very little

    AI should Be Transparent & Accountable. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • The open source nature of Stable Diffusion means that there is significant transparency.
    • Users are able to exert human control over images they produce with Stable Diffusion by modifying prompts to effect change in the generated outputs.

     

    Violates this AI Principle

    • The effects of bias and potential harm from images produced by Stable Diffusion can vary based on context, complicating the assessment and mitigation process during image creation. Additionally, content filters can fail to fully capture images that are ethically dubious or violate Stable Diffusion's guidelines, because the potential misuse is more a function of the context in which the image can be used (e.g., disinformation, harassment, bullying, etc.) and not the image itself. Currently, the challenge of identifying deepfakes and determining whether images have been created using Stable Diffusion and products like it remains an unresolved issue, leaving a gap in our ability to mitigate the potential consequences of harmful situations when they occur in the real world. Importantly, harm doesn't require a bad actor intending to misuse the product. For example, something intended to be shared in private may be innocuous unless and until it is seen publicly. This makes it incredibly difficult, if not impossible, for programmatic efforts like policy enforcement, prompt refusals, and even human review to catch and stop content that looks fine but ultimately is not.

     

    Important limitations and considerations

    • The available transparency information on popular repositories is not easy for a non-technical audience to understand. As a result, it is far from obvious that the creators of Stable Diffusion intend it to be used as a research tool, not a consumer tool.
    • Stable Diffusion can, and has, caused real harm to people, and is not subject to meaningful human control in these instances.
    • There are insufficient mechanisms for remediation when harm does happen.


 

 
