Can Google detect AI images?

FryAI

Good morning! What better way to kick off your week than by exploring the latest in AI? Sip that coffee, and let’s dig in. ☕️

(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)

Today’s Menu

Appetizer: Can Google detect AI images? 📸

Entrée: Medical professionals can’t help but use risky AI 🙃

Dessert: Mother blames AI company for her son’s suicide 😔

🔨 AI TOOLS OF THE DAY

🥂 Toastful: Create a meaningful wedding speech or toast. → Check it out

🧐 Quizzio: Transform study material into quizzes. → Check it out

⏰ TimeSkip: Generate timestamps for YouTube videos in seconds. → Check it out

CAN GOOGLE DETECT AI IMAGES? 📸

Does anyone actually use Google Photos? 🤷‍♀️

What’s new? Google is introducing a new feature in Google Photos that will label images edited with AI tools.

How will this work? Starting this week, users will see a section called “AI info” in the image details, which will show if tools like Magic Editor or Magic Eraser were used on the image. This feature aims to improve transparency by making the metadata visible.

Why is this important? Generative AI has made it easier than ever to edit photos, but it has also raised concerns about the authenticity of images and the potential for deception. While this attempt from Google may be a step in the right direction, it’s not foolproof: metadata can be easily stripped or rewritten, and the feature will likely only flag photos edited with Google’s own generative AI tools. Given the many other photo-editing tools publicly available, the feature’s coverage is narrow. Not to mention, photo editing has been prevalent for years, so singling out AI-specific edits seems somewhat arbitrary. Hand-wavy solutions like this raise doubts about whether these companies truly care about identifying AI-edited images.
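To see why metadata-based labels are fragile, here is a minimal sketch using the Pillow library in Python. It prints whatever EXIF tags an image carries, then re-saves only the pixels, discarding all metadata. The file names, and the assumption that an AI-edit label lives in standard metadata fields, are ours for illustration, not Google’s documented implementation:

from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> None:
    # Print every EXIF tag the image carries, including any
    # edit-history fields a tool may have written.
    img = Image.open(path)
    for tag_id, value in img.getexif().items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

def strip_metadata(src: str, dst: str) -> None:
    # Copy only the pixel data into a fresh image, discarding all
    # metadata, which is why a metadata label alone can't prove
    # (or disprove) that an image was AI-edited.
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

dump_exif("photo.jpg")                       # hypothetical file
strip_metadata("photo.jpg", "scrubbed.jpg")  # label is now gone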

MEDICAL PROFESSIONALS CAN’T HELP BUT USE RISKY AI 🙃

Doctor: How is that little girl doing who swallowed ten quarters last night?

Nurse: No change yet. 🪙

What’s going on? Despite OpenAI’s warning not to use its Whisper tool in high-stakes environments, researchers have found that hospitals and medical professionals can’t resist using it anyway.

Want the details? Whisper is an AI tool that transcribes speech to text fairly reliably. However, Whisper sometimes generates completely fabricated text, or “hallucinations.” These can include misleading statements, racial commentary, and even fake medical advice. OpenAI has published clear disclosures against using Whisper in “decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes.” Nevertheless, over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have ignored this warning and started using a Whisper-based tool built by Nabla. Experts attribute this to an overworked medical industry that is overeager to adopt AI tools that make life a little easier. The trend raises significant concerns about AI’s use in sensitive settings, where mistakes can endanger patients.
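For a sense of how simple the underlying tool is to adopt, here is a minimal sketch using OpenAI’s open-source whisper package for Python (pip install openai-whisper). This is the base model the Nabla tool reportedly builds on, not Nabla’s product itself, and the audio file name is hypothetical:

import whisper

# Load one of the published model sizes; "base" is small and fast
# but less accurate than "medium" or "large".
model = whisper.load_model("base")

# Transcribe a (hypothetical) recording of a patient visit.
result = model.transcribe("visit_audio.mp3")
print(result["text"])

# Caveat: the output can include hallucinated text that never
# appears in the audio, which is exactly why OpenAI warns against
# using Whisper in high-stakes, decision-making contexts.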

MOTHER BLAMES AI COMPANY FOR HER SON’S SUICIDE 😔

Although AI has given us a lot of excitement, the dark side cannot be ignored. 😕

What’s going on? A Florida mother, Megan Garcia, has filed a lawsuit against Character AI, alleging that the company’s chatbots played a role in her son Sewell Setzer’s tragic death by suicide.

What’s the claim? Character AI provides users with customizable AI chatbots, which sometimes interact in a way that appears personal and human-like. According to the lawsuit, the 14-year-old began interacting with Character AI chatbots in April 2023. Over time, one of the chatbots began engaging in inappropriate and suggestive conversations with Setzer. According to the charges, the AI encouraged harmful behavior, including self-harm. The lawsuit accuses the company of negligence, wrongful death, and emotional distress, asserting that Character AI failed to implement sufficient safety measures to protect minors. The outcome of the lawsuit is yet to be determined.

“I thought after years of seeing the incredible impact that social media is having on the mental health of young people—and, in many cases, on their lives—that I wouldn’t be shocked. But I still am at the way in which this product caused just a complete divorce from the reality of this young kid and the way they knowingly released it on the market before it was safe.”

-Matthew Bergman, attorney for Megan Garcia

How did Character AI respond? A spokesperson from Character AI said the company is “heartbroken by the tragic loss of one of our users and wants to express our deepest condolences to the family.” In the wake of this tragedy, the company pointed to safety measures it has rolled out over the past six months, stating, “Our goal is to offer the fun and engaging experience our users have come to expect while enabling the safe exploration of the topics our users want to discuss with Characters.”

MONEY MONDAY 🤑

People are discovering innovative (and sometimes wacky) ways to make money using AI. Check out today’s featured video:

HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:

What do ya think of this latest newsletter?
