
This "smart toilet" might be watching you...


FryAI

NEW DAY! Thanks for kicking it off with us. We cooked up the freshest stories so you can stay golden. 🍟

(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)

The Smartest Free Crypto Event You’ll Join This Year

Curious about crypto but still feeling stuck scrolling endless threads? People who get in early aren’t just lucky—they understand the why, when, and how of crypto.

Join our free 3‑day virtual summit and meet the crypto experts who can help you build out your portfolio. You’ll walk away with smart, actionable insights from analysts, developers, and seasoned crypto investors who’ve created fortunes using smart strategies and deep research.

No hype. No FOMO. Just the clear steps you need to move from intrigued to informed about crypto.

Today’s Menu

Appetizer: Kohler’s smart toilet is spying on people 👀

Entrée: Google drops “Deep Think” for Ultra users 🧠

Dessert: Anthropic Interviewer turns AI into insights 💡

🔨 AI TOOLS OF THE DAY

🗣️ Nexorify: Use your voice to set reminders. → Check it out

🤪 StickerBox: Turn ideas into physical stickers. → Check it out

KOHLER’S SMART TOILET IS SPYING ON PEOPLE 👀

What’s up? Kohler is facing criticism for falsely claiming that its $599 Dekoda smart toilet camera system uses end-to-end encryption to protect sensitive bathroom data.

Want the details? Earlier this year, Kohler released the Dekoda, a $599 device that snaps photos inside your toilet bowl and analyzes them to provide gut-health insights. Anticipating privacy concerns, Kohler claimed all data was protected with “end-to-end encryption.” But security researcher Simon Fondrie-Teitler found that Kohler’s setup falls short of that claim. True end-to-end encryption means only the sender and recipient can access the data, as in apps like Signal or iMessage. Kohler’s system doesn’t meet that standard. Instead, it uses standard TLS encryption in transit plus encryption at rest, which protect data from outside eavesdroppers but still allow Kohler to decrypt and process it on its servers. The company’s privacy policy also allows bathroom images and biometric data to be used to train AI models and shared with third parties.
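The difference comes down to who holds the decryption key. Here’s a toy sketch of that distinction (this is NOT real cryptography — a simple XOR cipher stands in for AES/TLS so it runs with just the standard library, and the key names are made up for illustration):

```python
# Toy model of "who can read the data" -- NOT real cryptography.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: applying it twice with the same key decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

photo = b"sensitive bathroom image"

# --- TLS in transit + encryption at rest (what the researcher found) ---
# The data is encrypted, but the COMPANY's server holds a key,
# so it can decrypt, analyze, and share the plaintext.
server_key = b"held-by-the-company"
stored = xor_cipher(photo, server_key)
assert xor_cipher(stored, server_key) == photo  # company can decrypt

# --- True end-to-end encryption (the Signal/iMessage model) ---
# Only the user's endpoints hold the key; the server relays
# ciphertext it cannot read.
endpoint_key = b"known-only-to-user-devices"
ciphertext = xor_cipher(photo, endpoint_key)
assert xor_cipher(ciphertext, server_key) != photo    # company's key fails
assert xor_cipher(ciphertext, endpoint_key) == photo  # recipient succeeds
```

Both setups “encrypt your data,” which is why marketing language blurs them — but only the second one locks the company itself out.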

Why should you care? Clear language is crucial when sensitive data is involved. Misusing terms like “end-to-end encryption” can give people a false sense of privacy, especially when a device is photographing something as sensitive as their bodily waste. In the age of AI, users should be extremely cautious about how their data might be stored, analyzed, or used to train AI systems.

A Framework for Smarter Voice AI Decisions

Deploying Voice AI doesn’t have to rely on guesswork.

This guide introduces the BELL Framework — a structured approach used by enterprises to reduce risk, validate logic, optimize latency, and ensure reliable performance across every call flow.

Learn how a lifecycle approach helps teams deploy faster, improve accuracy, and maintain predictable operations at scale.

GEMINI DROPS “DEEP THINK” FOR ULTRA USERS 🧠

What’s new: Google just added a new “Deep Think” mode to the Gemini app for its Ultra subscribers.

How it works: Deep Think is designed to help the AI tackle harder problems—tricky math questions, scientific puzzles, and logic challenges. It does this by exploring several possible solution paths in parallel instead of following just one, which makes it much better at reasoning and problem-solving than earlier versions. It posted unusually high scores on difficult benchmarks: 41% on Humanity’s Last Exam and an unprecedented 45.1% on ARC-AGI-2 when using code execution. Ultra subscribers can try Gemini 3 Deep Think now by selecting “Deep Think” in the prompt bar and Gemini 3 Pro in the model dropdown.
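As a rough intuition for why exploring several paths helps (this is a conceptual sketch, not Google’s actual algorithm — the function names and scoring rule are invented for illustration): generate many candidate answers, then let a verifier pick the best one.

```python
# Conceptual sketch of "parallel thinking": sample several candidate
# solution paths and keep the best-scoring one, rather than committing
# to a single path. Assumption: this is an illustration, not Gemini's
# real mechanism.
import random

def solve_once(rng: random.Random, target: int) -> int:
    """One 'reasoning path': a noisy attempt at the true answer."""
    return target + rng.randint(-10, 10)

def deep_think(target: int, n_paths: int = 16, seed: int = 0) -> int:
    rng = random.Random(seed)
    candidates = [solve_once(rng, target) for _ in range(n_paths)]
    # A verifier ranks the candidates; here distance to the target
    # stands in for whatever real check (e.g. code execution)
    # validates an answer.
    return min(candidates, key=lambda c: abs(c - target))

single = solve_once(random.Random(0), 42)  # one path, first attempt
best = deep_think(42, seed=0)              # many paths, verified pick
# The verified pick over many paths is never worse than the single path.
assert abs(best - 42) <= abs(single - 42)
```

The quality of this approach hinges on the verifier: with a reliable check (like executing candidate code), more parallel paths reliably mean better answers, which is consistent with the code-execution benchmark gains reported above.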

Why does this matter? If you’re a student, professional, or anyone who deals with complicated questions, this update means Gemini can give clearer explanations and more reliable help. It brings us one step closer to having an everyday tool that can reason through problems the way a skilled human might.

ANTHROPIC INTERVIEWER TURNS AI INTO INSIGHTS 💡

Image: Anthropic

What’s up? Anthropic is launching a new tool called Anthropic Interviewer that runs large-scale interviews with people about how they use AI, starting with a pop-up invitation inside Claude.ai.

How does this work? The system plans an interview, holds a 10–15 minute adaptive conversation with each participant, and then helps researchers analyze the transcripts. This lets Anthropic gather insights at a scale traditional interviews can’t match—capturing not just what people do with AI inside the chat window, but how they use it afterward, how they feel about it, and what they want next. It’s a new way to bring real human perspectives directly into model development.

What were the results? In its first 1,250 interviews, the tool found that most workers feel AI boosts productivity, though many still worry about job displacement or social stigma. Creatives embraced the efficiency gains but felt anxious about economic pressures and losing creative control. Scientists were eager for stronger AI research partners but didn’t yet trust AI for core scientific reasoning. Overall, the findings reveal a workforce using AI widely but negotiating optimism, caution, and shifting expectations about the future.

HAVE YOU ENTERED OUR RAFFLE YET?

We are giving away META RAY-BANS … and huge discounts on our new AI community (coming soon)! To enter, all we ask is that you fill out this short, two-question survey:

🥇 Meta Ray Bans ($379 value) + First community month free.

🥈 50% off community membership for six months.

🥉 50% off community membership for three months.

HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:

What do ya think of this latest newsletter?


Your feedback on these daily polls helps us keep the newsletter fresh—so keep it coming!