Gemini gets emotional

Welcome to a new week—where we toast the trends, crisp the chaos, and dish out AI news worth biting into. 🍟
Find out why 1M+ professionals read Superhuman AI daily.
In 2 years you will be working for AI
Or an AI will be working for you
Here's how you can future-proof yourself:
Join the Superhuman AI newsletter – read by 1M+ people at top companies
Master AI tools, tutorials, and news in just 3 minutes a day
Become 10X more productive using AI
Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.
🤯 MYSTERY AI LINK 🤯
(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)
Today’s Menu
Appetizer: Gemini gets emotional 💙
Entrée: Reddit files lawsuit against Anthropic for data scraping 😳
Dessert: Bombing suspect used AI to plan attack 😔
🔨 AI TOOLS OF THE DAY
🎙️ Cleanvoice: Remove background noise, filler words, and pauses from your voice recordings. → Check it out
🖼️ Portrait Generator: Turn your selfies into artistic portraits in different styles. → Check it out
GEMINI GETS EMOTIONAL 💙
A “googol” is 10 to the 100th power, which is 1 followed by 100 zeros. 😃
What’s new? Google’s Gemini 2.5 is now capable of real-time audio conversations, multilingual dialogue, and emotionally expressive speech—making talking to AI feel more natural than ever.
How does it work? Gemini 2.5 was designed from the start to understand and create not just text, but also audio, images, video, and code. Its latest audio features allow it to hold realistic, low-latency conversations in over 24 languages, adjusting tone, accent, and emotion on the fly. It can ignore background noise, understand visual content, and even shift between speakers in generated dialogues. Developers can use this new Gemini feature to power audio experiences like podcasts, storytelling, or customer support bots, with voice styles controllable through simple text prompts.
“The evolution of text-to-speech technology is moving rapidly, and with our latest models, we're moving beyond naturalness to giving unprecedented control over generated audio. Now you can generate anything from short snippets to long-form narratives, precisely dictating style, tone, emotional expression and performance—all steerable through natural language prompts.”
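For the developers in the audience, here is a rough idea of what “voice styles controllable through simple text prompts” can look like in practice. This is a minimal sketch using Google’s google-genai Python SDK; the preview model name (gemini-2.5-flash-preview-tts), the prebuilt voice name (Kore), and the audio output settings are assumptions based on Google’s preview documentation, so check the current docs before relying on it.

```python
# Minimal sketch: steering Gemini's speech style with a plain-text prompt.
# Assumes the google-genai Python SDK (pip install google-genai) and a
# preview TTS model name; both may change, so treat this as illustrative.
import wave

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# The style direction is just natural language placed in front of the text.
prompt = "Say cheerfully, like a morning-show host: Welcome back to FryAI!"

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-tts",   # assumed preview model name
    contents=prompt,
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            voice_config=types.VoiceConfig(
                prebuilt_voice_config=types.PrebuiltVoiceConfig(
                    voice_name="Kore"  # assumed prebuilt voice
                )
            )
        ),
    ),
)

# The audio comes back as raw PCM bytes; wrap it in a WAV file to play it.
pcm = response.candidates[0].content.parts[0].inline_data.data
with wave.open("fryai_greeting.wav", "wb") as f:
    f.setnchannels(1)       # mono
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(24000)   # 24 kHz output, per the preview docs
    f.writeframes(pcm)
```

Swapping “cheerfully” for “in a hushed, spooky whisper” (or any other natural-language direction) is the whole trick: tone, pacing, and emotion are steered by the prompt rather than by audio-engineering parameters.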
Why should you care? This leap in AI speech tech means more human-like and accessible interactions with digital assistants, educational tools, and media. It’s a big step toward AI that doesn’t just understand you—it speaks your language, mood, and intent.
REDDIT FILES LAWSUIT AGAINST ANTHROPIC FOR DATA SCRAPING 😳
What’s the difference between a lawyer and a liar? The pronunciation. 😅
What happened? Reddit has filed a lawsuit against AI company Anthropic, accusing it of illegally using Reddit users’ posts to train its chatbot without permission.
Want the details? According to the lawsuit, Anthropic used automated bots to collect content from Reddit and then fed that content into its AI system, Claude. Reddit says this includes personal user data and that Anthropic never asked for consent. While Reddit has licensing deals with companies like Google and OpenAI worth hundreds of millions of dollars, no such agreement has been made with Anthropic.
BOMBING SUSPECT USED AI TO PLAN ATTACK 😔
Nothing funny here … 👎
What happened? Federal authorities say two men used a generative AI chatbot to help plan and build a bomb that exploded at a fertility clinic in Palm Springs, California, injuring four people.
How did this happen? According to investigators, the main suspect—who died in the blast—relied on the AI to help assemble a powerful car bomb using chemicals supplied by an accomplice who was arrested in New York. He used the AI chat program to search for detailed information on explosives, including chemical combinations and detonation velocity. The chatbot was not named.
“Three days before [the accomplice] arrived at [the suspect]’s house, records from an AI chat application show that [he] researched how to make powerful explosions using ammonium nitrate and fuel.”
Why does this matter? This case marks the second time this year that AI-assisted bomb-making has been linked to a violent attack. The first was in January, when a man used ChatGPT to help plan a car bomb attack in Las Vegas. These incidents highlight the dark side of rapidly advancing AI tools. While chatbots like ChatGPT and Claude offer many benefits, they can also be misused to seek dangerous information. These companies must do better at flagging potentially harmful queries.
MONEY MONDAY 🤑
People are discovering innovative (and sometimes wacky) ways to make money using AI. Check out today’s featured video:
HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:
What do ya think of this latest newsletter?
Your feedback on these daily polls helps us keep the newsletter fresh—so keep it coming!