Your first look at GPT-4.1

Good morning! Today’s stories have been prepared with extra care … and seasoning—lots and lots of seasoning. 🧂
Get Over $6K of Notion Free with Unlimited AI
Running a startup is complex. That's why thousands of startups trust Notion as their connected workspace for managing projects, tracking fundraising, and collaborating as a team.
Apply now to get up to 6 months of Notion with unlimited AI free ($6,000+ value) to build and scale your company with one tool.
🤯 MYSTERY AI LINK 🤯
(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)
Today’s Menu
Appetizer: OpenAI releases GPT-4.1 🤖
Entrée: Google uses AI to talk to dolphins 🐬
Dessert: AI device helps blind people navigate the world 👁️
🔨 AI FINANCE TOOLS OF THE DAY
📧 Klik: Turn designs into clickable email signatures instantly. → Check it out
👤 HeyGen: Create an AI spokesperson for marketing videos. → Check it out
📲 Publer: Schedule your social media posts. → Check it out
OPENAI RELEASES GPT-4.1 🤖
A new day = a new AI model. 🦾
What’s new? OpenAI has launched GPT-4.1, offering better performance, increased memory capacity, and lower costs than GPT-4o.
Want the details? GPT-4.1 can handle up to one million tokens of context—meaning it can analyze and respond to far more text, images, or video than previous models. It outperforms GPT-4o in coding and instruction-following, and it comes with two smaller siblings: GPT-4.1 Mini (affordable) and GPT-4.1 Nano (ultra-fast and lightweight). OpenAI claims GPT‑4.1 is also more accurate at identifying relevant information and is less distracted by outliers in the training data. Moreover, it is 26% cheaper than GPT-4o, making it a more efficient option for developers and businesses. This release signals a shift in OpenAI’s model strategy—favoring smarter, cheaper, and more scalable AI.
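For developers curious what the three-tier lineup looks like in practice, here is a minimal sketch of assembling a Chat Completions request body for each GPT-4.1 tier. The model identifiers match OpenAI's published names; the `build_request` helper and the example prompt are illustrative, not part of the OpenAI SDK.

```python
def build_request(prompt: str, tier: str = "full") -> dict:
    """Assemble a Chat Completions request body for a GPT-4.1 tier.

    tier: "full" (gpt-4.1), "mini" (more affordable),
          or "nano" (fastest and most lightweight).
    """
    models = {
        "full": "gpt-4.1",
        "mini": "gpt-4.1-mini",
        "nano": "gpt-4.1-nano",
    }
    return {
        "model": models[tier],
        "messages": [{"role": "user", "content": prompt}],
    }

# A latency-sensitive task might opt for the lightweight tier:
request = build_request("Classify this support ticket.", tier="nano")
print(request["model"])  # gpt-4.1-nano
```

The same request dictionary would be passed to an API client; which tier to pick comes down to the cost/speed/capability trade-off described above.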
GOOGLE USES AI TO TALK TO DOLPHINS 🐬
Q: What did the dolphin say when it got confused?
A: Can you be more Pacific? 🌊
What’s up? Google has worked with researchers at Georgia Tech and the Wild Dolphin Project to create DolphinGemma, an AI model trained to analyze and generate dolphin vocalizations.
How does it work? DolphinGemma is a lightweight, audio-focused AI model that uses decades of labeled dolphin sound data from the Wild Dolphin Project, the longest-running underwater dolphin study in the world. The model learns the structure of dolphin whistles, clicks, and burst-pulses—key elements of their communication—and can generate new, dolphin-like sound sequences. DolphinGemma helps researchers detect recurring vocal patterns and predict sequences, much like a language model does with human speech. This work complements ongoing two-way communication experiments using CHAT, a system that encourages dolphins to mimic synthetic whistles associated with objects.
Why is this significant? This breakthrough could reshape how we study and interact with intelligent marine animals. By identifying patterns in dolphin speech, AI may unlock a deeper understanding of dolphin society and even lay the groundwork for interspecies communication.
“We’re beginning to understand the patterns within the sounds, paving the way for a future where the gap between human and dolphin communication might just get a little smaller.”
AI DEVICE HELPS BLIND PEOPLE NAVIGATE THE WORLD 👁️
Over 1 million Americans are blind. 🫶
What’s new? Researchers have developed a wearable AI-powered system that helps visually impaired individuals navigate their surroundings more effectively than a traditional white cane.
How does it work? The system uses a camera mounted on a pair of glasses to capture the environment in real time. An AI program processes these live images and identifies obstacles, people, doors, and other objects. It then guides the wearer through subtle audio cues delivered every 250 milliseconds and through vibrating patches worn on the wrists and fingers. These patches alert users to nearby obstacles or when to grasp objects. In trials, participants navigated indoor mazes 25% more efficiently than with a cane, and real-world tests on city streets showed similarly promising results. The current design is still a prototype, but researchers are working to make the device smaller, lighter, and potentially as discreet as a contact lens.
Why is this important? This technology could greatly increase independence and mobility for people with visual impairments, especially in busy urban environments where traditional canes may fall short.
“WATCH THIS” WEDNESDAY 👀
Our FryAI team has created an EASY-TO-FOLLOW course for ABSOLUTE BEGINNERS hoping to learn how to use AI. This course focuses on how to create images with AI.
HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:
What do ya think of this latest newsletter?
Your feedback on these daily polls helps us improve the newsletter—so keep it coming!