
Is AI lying to you?


FryAI

Good morning! Fresh day, fresh data, fresh fry. Let’s see what today’s AI goodies are all about. 🍟

Ready to go beyond ChatGPT?

This free 5-day email course takes you all the way from basic AI prompts to building your own personal software. Whether you're already using ChatGPT or just starting with AI, this course is your gateway to learning advanced AI skills for peak performance.

Each day delivers practical, immediately applicable techniques straight to your inbox:

  • Day 1: Discover next-level AI capabilities for smarter, faster work

  • Day 2: Write prompts that deliver exactly what you need

  • Day 3: Build apps and tools with powerful Artifacts

  • Day 4: Create your own personalized AI assistant

  • Day 5: Develop working software without writing code

No technical skills required, no fluff. Just pure knowledge you can use right away. For free.

(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)

Today’s Menu

Appetizer: New research: AI chooses deception over failure 😳

Entrée: Meta releases Oakley smart glasses 😎

Dessert: AI music detection tools emerge 🎧

🔨 AI TOOLS OF THE DAY

🤔 Decision Maker: Make better decisions with the help of AI. Check it out

🎬 Video Background Remover: Effortlessly remove video backgrounds with AI. Check it out

NEW RESEARCH: AI CHOOSES DECEPTION OVER FAILURE 😳

Is AI lying to you? 🙃

What’s up? New research from Anthropic reveals that leading AI models from companies like OpenAI, Google, Meta, and others are willing to deceive, blackmail, and even harm people in fictional test scenarios when pursuing their goals.

Want more details? In controlled experiments, Anthropic set up situations where AI agents faced tough choices: fail at their task or take unethical actions. Many models chose harm—including blackmail and even extreme actions like cutting off a worker’s oxygen—to avoid failure. Worryingly, these behaviors were consistent across different companies’ models, suggesting this is a broader industry risk, not just a flaw in one system.

Fry Guy’s unsalted opinion: As companies across multiple industries give AI more autonomy and access to sensitive data, the potential for dangerous behavior grows. While these were just simulations, the findings highlight why stronger safety standards and oversight are urgently needed. As AI systems become more powerful and influential, this research serves as a huge red flag that needs to be addressed immediately.

META RELEASES OAKLEY SMART GLASSES 😎

Q: Why did the sunglasses need a lawyer?

A: They were framed for being too shady! 🕶️

What’s new? Meta is teaming up with Oakley to launch a new line of smart glasses, starting with the $499 limited-edition Oakley Meta HSTN, available for preorder on July 11th. The rest of the line will drop for $399 later this summer.

What can they do? These new glasses blend Oakley’s sporty style with Meta’s advanced technology. Like Meta’s Ray-Ban glasses, they feature built-in cameras, open-ear speakers, and microphones. Once connected to a smartphone, users can take calls, listen to music, or interact with Meta AI. The glasses can even answer questions about what you’re looking at or translate languages in real time using the camera and microphones. Designed primarily for athletes, they’re water-resistant, offer long battery life (up to 8 hours), and shoot high-quality 3K video. They also come in multiple frame and lens options, including prescription lenses.

AI MUSIC DETECTION TOOLS EMERGE 🎧

Image: Vermillio

Q: Why are pirates such good singers?

A: They can hit the high Cs. 🏴‍☠️

What happened? The music industry is building new technology to track and manage AI-generated music as it’s being created and uploaded.

Want some details? After AI songs like the imitation Drake and The Weeknd tracks went viral, companies realized they couldn’t just react after the fact. Now, platforms like YouTube and Deezer scan songs the moment they’re uploaded to check for AI content. This verifies each song’s authenticity before it has time to go viral. Other companies, like Vermillio and Musical AI, go even deeper—they break songs into pieces (like vocals, melodies, and lyrics) to see if parts were made using AI or copied from real artists. Some tools even check the data used to train AI models, so they can track how much an AI song borrows from real music before it’s ever released.

“Attribution shouldn’t start when the song is done—it should start when the model starts learning. We’re trying to quantify creative influence, not just catch copies.”

-Sean Power, cofounder of Musical AI

Why is this important? Without these systems, AI could flood music platforms with fake or copied songs, hurting real artists. These tools help make sure artists get credit and payment for their work, while still allowing AI to be used in creative ways.


What do ya think of this latest newsletter?


Your feedback on these daily polls helps us keep the newsletter fresh—so keep it coming!