The Dark Side Of AI: Inside The AI-Generated CSAM Problem

Welcome to this week’s Deep-fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts in-depth analyses of cutting-edge artificial intelligence (AI) developments and developers. Today, Fry Guy dives into the dangers of AI-generated CSAM. We hope you enjoy!

*Notice: We do not receive any monetary compensation from the people and projects we feature in the Sunday Deep-fried Dives with Fry Guy. We explore these projects and developers solely to reveal interesting and cutting-edge AI developments and uses.*


In May, the Department of Justice (DOJ) made its first arrest related to AI-generated child sexual abuse material (CSAM). 

The material was created by a 42-year-old software engineer in Holmen, Wisconsin. A report determined that he used Stable Diffusion 1.5, which has fewer ethical safeguards than popular AI image tools like Midjourney or DALL-E 3, to create the explicit material. After creating these images, he used them to try to lure minors. He is now charged with four counts of transferring obscene material to a minor under the age of 16.

Unfortunately, this case is one of many. Last fall, 21 teenage girls in Spain made headlines when AI-generated nudes of them circulated online. In March, a group of middle school students in Los Angeles arrived at school to find that classmates were spreading images of them with realistic nude bodies, created with an AI-powered app. The stories continue to pile up, and many of them never get told.

In today’s article, we are going to dive into one of the darkest areas of AI: its potential to create CSAM. We hope to outline the problems associated with this dark part of AI and offer some perspective on the issue. We recognize this issue is sensitive, and that’s exactly why it must be talked about!

WIDESPREAD HARM

Reports of CSAM have risen more than 20% over the past three years, with 36,210,368 reports of suspected child exploitation filed in 2023. Since 2020, the number of urgent, time-sensitive reports (where a child is at risk of harm) has grown by more than 140%. In 2023, the CyberTipline received over 186,000 reports of online enticement, a more than 300% increase from 2021. The main driver behind this explosion in exploitation? Generative AI.

A recent report from the Internet Watch Foundation (IWF) revealed a disturbing rise in AI-generated CSAM online. In a 30-day review of a dark web forum used to share CSAM, the IWF found 3,512 AI-generated CSAM images and videos. That is a 17% increase over a similar review conducted in the fall of 2023, and it shows how much worse the problem is becoming as AI's capabilities improve and unrestricted tools continue to spread across the web.

“Put simply, CSAM generated by AI is still CSAM.”

 -Lisa Monaco, Deputy Attorney General at the DOJ

The proliferation of AI-generated CSAM has been one of the most devastating consequences of generative AI, and tech companies have made efforts to combat the issue. Last summer, seven of the largest AI companies in the U.S. signed a public pledge to abide by a handful of ethical and safety guidelines surrounding CSAM. As a result, many of the image and video generators offered by these companies have tightened restrictions on mimicking real people and depicting harmful events or situations. However, these companies have no control over the numerous smaller AI programs, often free to use, that have littered the internet.

U.S. federal law defines CSAM as any visual depiction of sexually explicit conduct involving a minor—which may include “digital or computer generated images indistinguishable from an actual minor” as well as “images created, adapted, or modified, but appear to depict an identifiable, actual minor.” So why not just arrest those who engage in creating and spreading CSAM, like the DOJ did with that Wisconsin software engineer? This seems like a straightforward solution, but the rabbit hole gets much deeper.

The DOJ's arrest of the Wisconsin software engineer discussed at the beginning should not be taken as the standard outcome for everyone who creates and spreads such material, but it does serve notice that using AI for such purposes will not be tolerated and deserves punishment. Arrests like this cannot realistically be expected in every case, because accurately tracking who is responsible for the material, and who can be legally punished, has proven extremely difficult, and the task is only getting harder as generative AI becomes more realistic.

Because CSAM is often shared on the dark web, it is difficult to track unless it is reported. Even when AI-generated CSAM is found, it is almost impossible to determine who created it: most of the tools are used anonymously, and the material is shared so widely that the original source becomes incredibly difficult to trace. Synthetic watermarking has been tried, but without strict regulation enforcing it, unrestricted tools that skip it will always emerge. Moreover, since spreading CSAM is already illegal, banning such tools will not prevent criminals from finding ways to use diffusion models for harmful purposes. The cat is already out of the bag, so to speak, and we are left dealing with the problem as it is.

The prevention problem doesn’t stop there. Even if there were a reliable way to trace the source of AI-generated CSAM, there is disagreement about who should be blamed: the developers of the AI tool used to create the content, or the users of the tool themselves? On one hand, the developers are responsible for creating these generative AI tools, without which the material could not be created so easily, and developers who do not safeguard their tools against generating such material seem to be at fault. Popular tools like Midjourney and DALL-E, for instance, return an error message when asked to create such harmful images. On the other hand, the developers are not the ones creating the harmful images. They provide the technological capability, but they are not the ones entering the prompts and spreading the CSAM, so arguably only the user should be punished. Others take a third approach, arguing that the government itself is at fault for allowing such tools to be legally developed and that stricter safeguards should be put in place.

This debate mirrors the broader debate over weaponry and gun violence in society. When someone commits a harmful act with a gun, who should be blamed? The shooter, the manufacturer, the government that allowed the gun to be purchased, or someone else? These questions have been debated in countries across the world for hundreds of years, and a similar debate is now playing out around the potentially harmful force of generative AI models. So even when the material can be found, who can be legally blamed, and who should be, remains a heated issue that adds another layer of complexity to the problem.

HOW CAN AI CSAM BE STOPPED?

It’s no overstatement to say that, with regard to AI-generated CSAM, we find ourselves in a tough and complicated situation with a lot of moving parts. The very tools that have the potential to do great good are also being used for evil. But the human spirit for innovation ought not be crushed, and smart minds are looking for ways to mitigate this dark issue.

Some believe hard regulatory measures will fix the CSAM problem. However, as we said earlier, the cat is already out of the bag. Although big tech companies like Microsoft, OpenAI, Google, and Apple are happy to comply with such regulations, small independent developers with little to lose still have access to the technological infrastructure needed to train harmful models. As a result, tightening regulations alone will do little to mitigate the problem.

So does this mean we are doomed? Not quite. One effective approach to mitigating AI-generated CSAM is to fight AI with more AI. A tech company named Thorn is doing just that with its “Safer” CSAM classifier. As Thorn puts it, “The sheer volume of CSAM to be reviewed and assessed far outweighs the number of human moderators and hours in the day.” AI can scan for CSAM at a scale the human eye cannot. The Safer classifier, which runs as a self-hosted deployment, is trained to recognize CSAM and can detect, review, and report suspected material far faster and more accurately than human reviewers alone. In 2022, Safer classified 304,466 images and 15,238 videos as potential CSAM. One recent classifier hit led to the discovery of 2,000 more CSAM images; once the material was reported to the NCMEC, law enforcement investigated and a child was rescued from active abuse. As Thorn said, “That’s the power of this life-changing technology.” Although there is still plenty of room for improvement, this approach offers some promise.

Although AI has the potential to improve workflows, manage finances, tell us stories, help us research, and much more, its dark side is incredibly unnerving. As AI contributes to the proliferation of CSAM on the web, it is vital that humans come together to embrace the good in AI while looking for ways to mitigate the harm it can cause. In this way, the emergence of AI is a test of the human spirit.
