A Disturbing Twist: Washington Lottery's Unforgivable AI Image Blunder

Welcome to this week’s Deep-fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts an in-depth analysis of a cutting-edge artificial intelligence (AI) development or developer. Today, Fry Guy is exploring a Washington Lottery blunder in which AI was used to create an X-rated image of a woman. We hope you enjoy!

*Notice: We do not gain any monetary compensation from the people and projects we feature in the Sunday Deep-fried Dives with Fry Guy. We explore these projects and developers solely for the purpose of revealing to you interesting and cutting-edge AI projects, developers, and uses.*


What do you do when you think you’ve won a lottery vacation, but instead you’re presented with an AI-rendered, X-rated picture of yourself?

The State of Washington’s lottery used AI to generate a topless picture of an unsuspecting mother who was playing one of its new mobile games. Let’s explore how this happened, and where it leaves us in a world littered with deepfakes.

A LOTTERY WINNER OR AN UNFORGIVABLE BLUNDER?

The Washington Lottery recently created and launched a new mobile game called “Test Drive A Win.” The game has players virtually throw a dart at a dartboard. But unlike a typical dartboard filled with red, black, and cream-colored regions, this lottery dartboard was filled with photos of dream vacation spots, such as Hawaii, Bora Bora, and San Juan. If a player’s virtual dart hit, say, Bora Bora, the player could then upload a picture of themselves to the app. The app would then use AI to superimpose the player’s face and likeness onto that vacation spot in the form of a deepfake picture, showing the player what it would look like if they were at that destination. The game was meant to serve as a marketing strategy for the lottery, helping players visualize themselves in exotic vacation spots via deepfake technology, presumably to get them to spend more money on lottery tickets.
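
The Lottery has never said how the compositing worked under the hood, but the basic “put the player into the scene” step can be sketched crudely. Below is a minimal, hypothetical illustration using OpenCV’s stock face detector and naive pasting; the real app almost certainly used a generative model rather than this, and the file names, sizes, and paste coordinates are assumptions for illustration only.

```python
# A crude sketch of the "superimpose a face onto a destination photo" idea.
# NOT the Lottery's actual method: the real app reportedly used generative AI.
import cv2

def paste_face_on_destination(selfie_path: str, destination_path: str,
                              out_path: str = "composite.jpg") -> None:
    """Detect a face in the selfie and paste it onto the vacation photo."""
    selfie = cv2.imread(selfie_path)
    destination = cv2.imread(destination_path)
    if selfie is None or destination is None:
        raise FileNotFoundError("Could not read one of the input images")

    # Classic Haar-cascade face detector that ships with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(selfie, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("No face found in the uploaded photo")

    # Crop the first detected face and drop it into a fixed spot on the scene
    # (assumes the destination image is at least 200x200 pixels).
    x, y, w, h = faces[0]
    face = cv2.resize(selfie[y:y + h, x:x + w], (160, 160))
    destination[40:200, 40:200] = face

    cv2.imwrite(out_path, destination)
```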

Despite the fun intent of this marketing strategy, it all came crashing down in the most disturbing way. Megan, a 50-year-old mother from Olympia, Washington, tried out the new game, and it took a dark twist. Megan threw the virtual dart in the app, and it landed on a “swim with the sharks” dream vacation option. Megan says she used the in-app option to take a photo of her face to upload, but instead of showing her on a tropical beach in the Caribbean, “Test Drive A Win” placed her on a bed, sitting in a sexual position, wearing nothing but bikini bottoms. The background of the image showed that the bedroom was underwater, with fish swimming around her. Ironically, the Washington Lottery’s watermark logo was placed on the bottom right-hand side of the image. There was no way the lottery could deny its creation: it quite literally gave the image its “stamp” of approval.

Megan was left fuming, exclaiming, “Our tax dollars are paying for that! I was completely shocked. It’s disturbing to say the least.” She added, “I also think whoever was responsible for it should be fired.” Megan is correct that tax dollars did, indeed, pay for the app. The app wasn’t some private company’s independent product; it was commissioned by the Washington state government itself through its lottery system. This is the same government that has railed against deepfake nudes in the past, and which is currently trying to make explicit deepfakes punishable by jail through Bill HB 1999. That raises the question: since the Washington Lottery is a government agency, should it, along with those responsible for Megan’s nude picture, be put behind bars?

Despite the governing authorities failing to meet their own harsh demands on deepfakes, the Washington Lottery refuses to hold itself accountable for this mistake. When questioned about Megan’s image, the Lottery deflected responsibility, saying, “We were made aware that a single user of the AI platform was purportedly provided an image that did not adhere to the built-in parameters set by the developers. Prior to launch, we agreed to a comprehensive set of rules to govern image creation, including that people in images be fully clothed.” Instead of taking responsibility for the mistake, the Lottery brushed it off by pointing at the developers of the app. So at the end of the day, it doesn’t look like anyone is going to take responsibility for this AI deepfake nude mishap unless Megan decides to file a lawsuit, which she has not yet decided to do.

The story doesn’t end there. Not only did the Washington Lottery point the finger at the app developers (whom it hired, by the way), but it also claims there were safeguards in place for the app to abide by when creating pictures. This seems inconsistent with what is known about current AI image generators and their safeguards. It is very difficult to create a nude picture with mainstream text-to-image tools like DALL-E and Midjourney right now, as they flag keywords and deny suggestive requests. DALL-E and Midjourney, two of the biggest AI image-creation tools on the market, would have been plausible options for the Lottery’s app, and had they been used, such a mistake would likely not have been possible. So how this app was able to make Megan topless remains a mystery. If the developers used an alternative tool, that raises the question of why, and it would also suggest that the safeguards (if there truly were any) were not implemented correctly.
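
To make the “safeguards” claim concrete: a typical guardrail runs every request through a keyword filter and a moderation check before any image is generated. Here is a minimal sketch of that pattern, assuming OpenAI’s Python SDK; the provider choice, blocklist, and function name are illustrative assumptions, since the Lottery never disclosed which tools its vendor actually used.

```python
# A minimal sketch of a pre-generation guardrail using OpenAI's Python SDK.
# Illustrative only: we don't know which stack the Lottery's vendor used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A crude, app-specific blocklist layered on top of the provider's filters.
BLOCKED_TERMS = {"nude", "topless", "naked", "nsfw"}

def generate_vacation_image(prompt: str) -> str:
    """Return an image URL, refusing prompts that fail either safety check."""
    # Check 1: reject obvious keywords before spending an API call.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        raise ValueError("Prompt rejected by keyword filter")

    # Check 2: run the prompt through the provider's moderation endpoint.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        raise ValueError("Prompt rejected by moderation endpoint")

    # Only now generate the image; DALL-E 3 also applies its own server-side
    # content policy and may still refuse or rewrite the prompt.
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
    )
    return result.data[0].url
```

If anything like this was in place, a topless output should have been caught at one of these layers, which is what makes the Lottery’s account so puzzling.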

Despite this massive blunder and the underlying mystery about how it happened, the good news is that the Washington Lottery shut down the app “out of an abundance of caution” and is currently working to fix this embarrassing mistake.

WHO GETS THE BLAME WHEN AI GOES ROGUE?

Unfortunately, the Washington Lottery blunder is not the only one of its kind. Many similar mistakes have taken place in the past few years, even at large tech companies. For instance, Meta’s Facebook sticker generator put weapons in the hands of children’s cartoon characters. More recently, a Microsoft engineer publicly accused the company’s Copilot image-generation tool of randomly creating violent and sexual imagery, even after the team was warned of the issue.

Beyond large company blunders, the rise of deepfake pornography has become a major area of concern. This industry has been gaining so much traction that one website dedicated to sexualized deepfakes (usually created and shared without consent) receives around 17 million hits each month. This has caused many organizations to push for suitable punishments, such as fines and imprisonment.

Despite the push for justice in a deepfake world, many believe such punishment is neither viable nor practical. Have we ventured too far down a rabbit hole from which we cannot escape? Many liken generative AI’s capacity for deepfakes to the creation of the atomic bomb: now that the technology exists, the innovation cannot be reversed. People already have their hands on the necessary code and underlying technology, so taking it away is no longer an option. Instead, we must try to find ways to mitigate risks and punish those culpable.

In a perfectly just world, those who create harmful deepfakes would face some sort of punishment, but exacting punishment for such behavior is not so easy. For one thing, deepfakes are hard to distinguish from real images, so detecting, let alone proving, that an image is a deepfake is very difficult. Moreover, these deepfakes have proliferated so widely that tracing each image back to its original source and punishing every creator would be a monumental and never-ending task.

Even if the origin of every deepfake image could be traced, a further debate ensues over who ultimately deserves the blame. Do the developers of the tool deserve the blame when others use it irresponsibly, as the Washington Lottery seems to think? That would be an easy, traceable target: just blame the big tech companies! But by that logic, knife makers would be responsible for all stabbings, and that just doesn’t seem right. We have to remember that image-creation tools are just that: tools. Regardless of the safeguards put in place, the underlying model can still be used for malicious purposes. So maybe it really is the user of the tool who deserves the blame when image creation goes wrong. But this, again, requires that we be able to track who created the images … and we are back to square one.

So are we in too deep with deepfakes? Undoubtedly, as of now, we are in a difficult spot. But that doesn’t mean we should give up on finding creative solutions to the problem and trusting in the power of humanity to overcome the dark spots in this revolutionary technology.
