Reading Your Own Obituary: How AI Is Faking Death And Spreading Fake News

Welcome to this week’s Deep-fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts an in-depth analysis of a cutting-edge artificial intelligence (AI) development or developer. Today, Fry Guy dives into the nuances surrounding AI-written obituaries and the spread of fake news. We hope you enjoy!

*Notice: We do not gain any monetary compensation from the people and projects we feature in the Sunday Deep-fried Dives with Fry Guy. We explore these projects and developers solely for the purpose of revealing to you interesting and cutting-edge AI projects, developers, and uses.*



“Reading your own obituary is a surreal experience.”

These are the words of Deborah Vankin, who found that news of her death was spreading online, despite her being very much alive. She is one of many who have had this shocking experience.

How did this happen, and where does AI come into play? Let’s explore.

READING YOUR OWN OBITUARY

On what seemed like any normal morning, Vankin was gathering her things and heading out the door when her phone rang. It was her dad. “The first thing he said was, ‘Please don’t be alarmed by what I’m about to send you or what I’m about to tell you, but there’s an obituary out there for you that I just read.’” Her reaction, like anyone’s would be, was a mix of confusion, anxiety, and rage. She said, “I read the obit and nothing can really prepare you for that. I immediately felt sort of shocked. And I felt, physically, my heart race and was having a sort of anxious response when I was reading it. And then I felt sad. I actually felt a true sadness, reading this.”

Unfortunately, Vankin isn’t the only one who has had this experience. Brian Vastag was grieving the suicide of his partner, Beth Mazur. While he was on his way to visit the place of her death, obituaries began to surface claiming that he had died along with her. One of the fake obituaries read, “The recent passing of Beth Mazur and Brian Vastag, both grappling with the challenges of Chronic Fatigue Syndrome, serves as a poignant reminder of the resilience of the human spirit in the face of adversity.” The fake obituaries pushed the real one further down in web searches, making it harder for Mazur’s vast network of friends to find accurate information; many believed the false reports and were left confused. Vastag was deeply disturbed. “I was dealing with the shock of losing somebody, and I was really upset that the obituary about me caused [additional] stress.” The incident led him to proclaim, “The internet has turned into a pile of nonsense.”

These stories are just two among a sea of AI-generated obituaries flooding the web, and they raise several questions: where do these fake obituaries come from, why are they created, and what can be done about them?

WHAT’S GOING ON BEHIND FAKE OBITS?

Behind these fake obituaries is a ploy for information and money. According to international journalist Lucia Stein, “Anonymous internet fraudsters use search engine optimization (SEO) to identify people looking up the name of someone who has recently died. They then create ‘obit’ stories about this stranger by scraping details of their life from social media and other websites.” These “obit pirates” hope to capitalize on search interest in a particular person, stuffing their stories with keywords so the content ranks highly on Google and directs traffic to their sites.

Vankin believes the fraudsters were tipped off to her name by a popular article she wrote about her fear of driving on LA’s chaotic freeways. Vastag suspects that his advocacy work with Mazur for people with often-overlooked chronic illnesses drew the attention. Both cases show how subtle the clues these fraudsters seize on can be.

The goal of these obituaries is to garner maximum attention so the creators can sell ads on the story. Oftentimes, this can lead to significant revenue, especially if the story is interesting (or stirs controversy, for that matter) and starts to gain traction. Cybersecurity expert Mohi Ahmed explains the process as follows:

“Let’s say it’s 10 cents a click or whatever it might be, it’s a matter of scale and reach. So if you’re targeting someone in this obituary who is a celebrity that is very well known to the public, that’s going to get broader reach and therefore more clicks … If you’re then doing that for multiple different types of people, you can imagine that those clicks, and those small pennies on the page do start to add up over time.”

The more interesting and controversial the story, the more clicks. The more clicks? The more money. The formula really is that simple, and it’s working: some scammers claim to have made thousands of dollars in ad revenue from single articles, further fueling the spread of such misleading material.
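To see how quickly those pennies add up, here is a back-of-the-envelope calculation in Python. Every number in it (the per-click rate, the traffic, the output of a single scammer) is an illustrative assumption, not a figure reported by the sources above:

```python
# Back-of-the-envelope economics of obituary spam.
# All numbers are illustrative assumptions, not reported figures.
cpc = 0.10               # assumed ad revenue per click, in dollars
clicks_per_obit = 2_000  # assumed clicks a single fake obituary attracts
obits_per_week = 50      # assumed weekly output of one scammer using AI tools

weekly_revenue = cpc * clicks_per_obit * obits_per_week
print(f"Estimated weekly revenue: ${weekly_revenue:,.2f}")
# -> Estimated weekly revenue: $10,000.00
```

Even at modest per-click rates, volume does the work: generating the obituaries costs almost nothing, so nearly all of that figure is profit.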

IS AI “FAKE NEWS” UNSTOPPABLE?

Unfortunately, in the world of generative AI, it is very easy to generate fake obituaries. It’s as simple as gathering the slightest bit of information about a person and asking a chatbot to put together an interesting story. Robert Wahl, an associate professor of computer science at Concordia University Wisconsin, stated, “There’s very low startup costs for this. You can use free services that are available on the internet. And you can generate this for little to no cost. And it can pay some revenue, so there’s an incentive to do it.” Many of these scams originate overseas, making legal punishment difficult. Wahl states, “It may or may not be illegal in all countries. So the challenging situation is trying to determine whether it’s illegal activity—even though it’s certainly done in poor taste. And so this is for the most part something we cannot avoid. We just have to learn to identify the hoaxes.”

The problem of AI-generated fake news goes beyond obituaries. With the rise of generative AI, creating engaging stories that captivate the public eye has become easier than ever, and such stories can deceive many people with little to no work or investment. Just last year, for instance, a fake image of a bombing at the Pentagon spread across social media and was even featured by reporters at CNN. It sent waves through the stock market until the story was confirmed to be a hoax. The rise of fake obituaries, in other words, is one symptom of a deeper problem: content authenticity at scale.

So what can be done? Google, the world’s most popular search engine, says it is constantly updating its systems to restrict spam and combat evolving techniques. The company recognizes the aim of these fake articles, stating, “They are produced at scale with the primary intent of gaming search ranking, and offer little value to users.” In response, the tech giant implemented new AI-driven updates to identify “spammy, low-quality content” in its search results, assuring users that the highest-quality content will stay at the top. The approach targets popularity itself: misleading articles, even ones that gain traction, should no longer rise to the top of search pages and reach the public eye. “With our recent updates to our search spam policies, we’ve significantly reduced the presence of obituary spam in search results,” a Google spokesperson said.
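Google hasn’t published the internals of these updates, but the basic idea of scoring content for spam signals can be sketched in a few lines of Python. The phrase list and scoring rule below are invented purely for illustration; they are not Google’s actual signals:

```python
# A toy heuristic for flagging obituary-spam-like text.
# NOT Google's system: the signals and scoring are invented for illustration.
SPAM_SIGNALS = [
    "poignant reminder",               # stock AI filler phrases often seen
    "resilience of the human spirit",  # in mass-generated obituaries
    "passing of",
]

def spam_score(text: str) -> float:
    """Return the fraction of known spam signals present in the text."""
    lowered = text.lower()
    hits = sum(1 for phrase in SPAM_SIGNALS if phrase in lowered)
    return hits / len(SPAM_SIGNALS)

snippet = ("The recent passing of Beth Mazur and Brian Vastag serves as a "
           "poignant reminder of the resilience of the human spirit.")
print(spam_score(snippet))  # 1.0 -> every signal matched; likely spam
```

Real systems lean on far richer signals (site history, link patterns, learned models), which is exactly why the arms race is so hard to win.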

Google’s “spammy, low-quality” detection approach might work for a while, but as the quality of AI-generated content keeps improving, measures like this will struggle to keep up. Even with current models, it is very difficult to tell what is spam and what isn’t. This was underlined at the most recent Met Gala, where AI-generated images of celebrities such as Katy Perry, Rihanna, and Selena Gomez, none of whom actually attended the event, went viral on X, receiving millions of views and thousands of likes. X has since flagged the images as AI-generated, but not before hundreds of thousands of viewers were fooled.

Beyond Google’s efforts, many companies are stepping up to identify fake news and harmful deepfakes. Adobe, for instance, has developed “Content Credentials,” a digital watermarking system that records how a piece of media was created and edited. When an AI tool like OpenAI’s DALL-E generates an image, the image receives a credential indicating its AI origin; when that marked content is uploaded to various social platforms, it can be automatically labeled, providing transparency to users. This watermarking approach has been adopted by Meta’s social platforms, TikTok, and more. Whether such methods will prove effective remains to be seen, but they are at least a start.
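Adobe’s real system follows the C2PA specification, but the core mechanism, binding provenance metadata to a cryptographic hash of the media and signing it, can be sketched with Python’s standard library alone. The key, manifest fields, and HMAC signature below are simplified stand-ins for the actual certificate-based format:

```python
# Conceptual sketch of a content credential: provenance metadata
# cryptographically bound to a hash of the media file.
# A simplified stand-in for the C2PA format, not the real spec.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # real systems use certificate-based signatures

def attach_credential(media: bytes, generator: str) -> dict:
    """Build a signed manifest recording how the media was created."""
    manifest = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "generator": generator,   # e.g. the AI tool that made the image
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_credential(media: bytes, manifest: dict) -> bool:
    """Check the signature and that the hash still matches the media."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    sig_ok = hmac.compare_digest(manifest["signature"], expected)
    hash_ok = claims["content_sha256"] == hashlib.sha256(media).hexdigest()
    return sig_ok and hash_ok

image = b"...image bytes..."
cred = attach_credential(image, "DALL-E")
print(verify_credential(image, cred))         # True: untampered
print(verify_credential(image + b"x", cred))  # False: content was altered
```

The point of the design is that the label travels with the file: a platform that trusts the signer can verify the credential on upload and label the content automatically, without having to guess at its origin.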

So whether it’s fake bombings, fake photos, or fake obituaries, the spread of deceptive AI-generated content highlights the growing need for media literacy and vigilance in separating authentic content from AI-generated fake news. Fake news is not new, but AI has made it far harder to distinguish from the real thing, and such deceptive content is fast becoming an unavoidable part of our digital world. Unless people grow more skeptical of what they read online, or an effective way is found to easily identify such material, we will be left susceptible to dangers of all kinds.
