A new open source AI image generator capable of producing realistic pictures from any text prompt saw astonishingly rapid uptake in its first week. Stability AI's Stable Diffusion, high-fidelity yet capable of running on off-the-shelf consumer hardware, is now used by art generation services such as Artbreeder, Pixelz.ai, and others. But the unfiltered nature of the model means that not all of its use has been entirely above board.
For the most part, the use cases are above board. For example, NovelAI is experimenting with Stable Diffusion to create art that can accompany AI-generated stories created by users of its platform. Midjourney released a beta version that uses Stable Diffusion for greater photorealism.
But Stable Diffusion is also used for less palatable purposes. On the infamous 4chan message board, where the model leaked early, several threads have been devoted to AI-generated celebrity nude art and other forms of generated pornography.
Emad Mostaque, CEO of Stability AI, called it “unfortunate” that the model was leaked to 4chan and stressed that the company is working with “leading ethics and technology experts” on safety and other mechanisms around responsible release. One of these mechanisms is an adjustable AI tool, the Safety Classifier, included in the overall Stable Diffusion software package, which attempts to detect and block offensive or unwanted images.
However, the Safety Classifier – while on by default – can be disabled.
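The article does not name a specific implementation, but for a sense of how such a filter sits in the generation pipeline, here is a minimal sketch using the open source Hugging Face diffusers interface to Stable Diffusion; the checkpoint name and output fields are assumptions drawn from that library rather than details reported here.

```python
# Minimal sketch, assuming the Hugging Face `diffusers` interface to Stable Diffusion.
# The checkpoint name and output fields are assumptions about that library, not
# details confirmed by Stability AI or this article.
import torch
from diffusers import StableDiffusionPipeline

# Loading the pipeline pulls in the safety checker as one of its components,
# and it runs on every generated image by default.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("a photo of an astronaut riding a horse")

# Outputs the classifier flags come back blacked out, with a parallel list of
# booleans indicating which images were caught.
for image, flagged in zip(result.images, result.nsfw_content_detected):
    if not flagged:
        image.save("output.png")
```

The key point for this story is that the checker is a separate classifier applied to finished outputs rather than a constraint baked into the model's weights, which is why someone running the software on their own machine can remove it.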
Stable Diffusion is largely uncharted territory. Other AI art generation systems, such as OpenAI’s DALL-E 2, have implemented strict filters for pornographic material. (The permissive open source license for Stable Diffusion prohibits certain applications, such as exploitation of minors, but the model itself is not restricted at a technical level.) Moreover, many of those systems cannot create art of public figures, unlike Stable Diffusion. Combined, the two capabilities are risky, allowing bad actors to create pornographic “deepfakes” that – in the worst case – can perpetuate abuse or implicate someone in a crime they did not commit.

Emma Watson deepfake created by Stable Diffusion and posted on 4chan.
Women, unfortunately, are the most likely victims of this. A study conducted in 2019 found that, of the 90% to 95% of deepfakes that are non-consensual, about 90% are of women. That bodes ill for the future of these AI systems, according to Ravit Dotan, an AI ethicist at the University of California, Berkeley.
“I worry about other effects of synthetic images of illegal content – that they will exacerbate the illegal behaviors being depicted,” Dotan told TechCrunch via email. “For example, will synthetic child [exploitation] increase the creation of authentic child [exploitation]? Will it increase the number of attacks by pedophiles?”
Montreal AI Ethics Institute Principal Investigator Abhishek Gupta shares this sentiment. “We really need to think about the life cycle of the AI system, which includes post-deployment use and monitoring, and think about how we can envision controls that minimize harm even in worst-case scenarios,” he said. “This is especially true when a powerful capability [like Stable Diffusion] falls into the wild, where it can cause real trauma to those against whom such a system might be used, for example by creating unwanted content in the victim’s likeness.”
Something of a preview unfolded last year when, on the advice of a nurse, a father took pictures of his toddler’s swollen genital area and texted them to the nurse’s iPhone. The photo was automatically backed up to Google Photos and was flagged by the company’s AI filters as child sexual abuse material, leading to the deactivation of the man’s account and an investigation by the San Francisco Police Department.
If a legitimate photo can trip up such a detection system, experts like Dotan say, there is no reason deepfakes generated by a system like Stable Diffusion could not – and at scale.
“The artificial intelligence systems that people create, even when they have the best intentions, can be used in harmful ways that they don’t foresee and can’t prevent,” Dotan said. “I think developers and researchers often underestimate this point.”
Of course, the technology to create deepfakes has been around for a while, AI-powered or otherwise. A 2020 report from deepfake detection company Sensity found that hundreds of explicit deepfake videos featuring celebrities are uploaded to the world’s biggest porn websites every month; the report estimated the total number of deepfakes online at around 49,000, over 95% of which were pornography. Actresses including Emma Watson, Natalie Portman, Billie Eilish and Taylor Swift have been targeted by deepfakes since AI-based face-swapping tools went mainstream a few years ago, and some, including Kristen Bell, have spoken out against what they view as sexual exploitation.
But Stable Diffusion represents a newer generation of systems that can create incredibly, if not perfectly, convincing fake images with minimal user input. It is also easy to set up, requiring no more than a few installation files and a graphics card costing a few hundred dollars on the high end. Work is underway on even more efficient versions of the system that can run on an M1 MacBook.
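As a rough illustration of the memory-saving switches that make consumer-grade cards (and, increasingly, Apple silicon) workable, the sketch below again assumes the Hugging Face diffusers library; the specific options shown are common optimizations in that library and are not drawn from the article.

```python
# Illustrative sketch of low-memory settings, assuming the Hugging Face `diffusers`
# library; these particular options are not mentioned in the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,   # half-precision weights roughly halve GPU memory use
)
pipe = pipe.to("cuda")           # on an M1/M2 Mac, "mps" is the analogous device string
pipe.enable_attention_slicing()  # compute attention in chunks to fit smaller cards

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```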

Kylie Kardashian deepfake posted on 4chan.
Sebastian Berns, a Ph.D. researcher in the artificial intelligence group at Queen Mary University of London, believes that automation and the ability to scale up customized image generation are the big differences, and the main problems, with systems like Stable Diffusion. “Most harmful imagery can already be produced using conventional methods, but it is manual and requires a lot of effort,” he said. “A model that can produce near-photorealistic footage may give way to personalized extortion attacks against individuals.”
Berns fears that private photos scraped from social media could be used to condition Stable Diffusion or another similar model to generate targeted pornographic images or images depicting illegal acts. There is certainly precedent. After reporting on the rape of an eight-year-old Kashmiri girl in 2018, Indian investigative journalist Rana Ayyub became the target of Indian nationalist trolls, some of whom created deepfake porn with her face on another person’s body. The deepfake was shared by the leader of the nationalist political party BJP, and the harassment Ayyub received as a result became so bad that the United Nations had to intervene.
“Stable Diffusion offers enough customization to send automated threats against individuals who must either pay or risk having fake but potentially harmful footage published,” Berns continued. “We’re already seeing people being blackmailed after their webcam has been accessed remotely. That infiltration step may no longer be necessary.”
Since Stable Diffusion is out in the wild and already being used to generate pornography – some without consent – it may become incumbent on image hosts to take action. TechCrunch reached out to one of the major adult content platforms, OnlyFans, but did not hear back by the time of publication. A spokesperson for Patreon, which also allows adult content, noted that the company has a policy against deep fakes and prohibits images that “repurpose celebrity likenesses and place non-adult content in an adult context.”
However, if history is any indication, enforcement is likely to be uneven, in part because few laws specifically protect against deepfaking as it relates to pornography. And even if the threat of legal action has dragged down some sites dedicated to objectionable AI-generated content, there is nothing stopping new ones from popping up.
In other words, says Gupta, it’s a brave new world.
“Creative and malicious users can abuse the capabilities [of Stable Diffusion] to generate subjectively objectionable content at scale using minimal inference resources – which is cheaper than training the entire model – and then post it in venues like Reddit and 4chan to drive traffic and hack attention,” Gupta said. “There is a lot at stake when such capabilities escape ‘into the wild,’ where controls such as API rate limits and safety controls on the types of output returned by the system no longer apply.”