FryAI

Gemini to power Apple’s iPhones?

Good morning! I hope you haven’t eaten breakfast yet, because we are serving up heaping portions of AI news. 🍽️

(The mystery link can lead to ANYTHING AI related. Tools, memes, articles, videos, and more…)

Today’s Menu

Appetizer: Gemini to power Apple’s iPhones? 📱

Entrée: xAI open-sources Grok code 🤖

Dessert: YouTube’s new AI labeling requirements 🎥


🍭 CandyCall: Send clever AI phone calls for any occasion. → check it out

👨‍💻 Octomind: Find bugs on your website before others do. → check it out

♟️ Chesski: Your personal AI chess tutor. → check it out


Your smartphone is about to get smarter. 🤓

What’s up? Apple is in discussions with Google to integrate the Gemini AI engine into future iPhone models. Specifics regarding terms and implementation remain undecided, but these negotiations aim to license Gemini for upcoming iPhone software features.

What’s the significance? On Google’s side, the deal could extend its AI services to more than two billion active Apple devices, helping close the gap with Microsoft-backed OpenAI in the race for AI dominance. For Apple, it would bring advanced AI models to its devices, putting AI at iPhone users’ fingertips in ways they never have had before. Despite these positives, such a partnership might draw regulatory scrutiny toward Apple, particularly given Google’s ongoing antitrust battles and its recent Gemini image-generation debacle. Nevertheless, analysts view the collaboration as a strategic move to fortify Apple’s AI capabilities while expanding Google’s reach in the smartphone market.

“This strategic partnership is a missing piece in the Apple AI strategy and combines forces with Google for Gemini to power some of the AI features Apple is bringing to market.”

-Daniel Ives, analyst at Wedbush


There are 10 kinds of people in the world: those who understand binary and those who don’t. 👾

What happened? Elon Musk’s xAI has officially open-sourced the base code of the Grok AI model on GitHub.

What’s the significance? Grok was previously available only as a chatbot for Premium+ subscribers of the X social network. The newly released Grok-1 base model is licensed under Apache License 2.0, which permits commercial use. xAI said the open-source model wasn’t fine-tuned for any particular application, such as conversation; the company noted only that Grok-1 was trained on a “custom” stack, without specifying details. Various companies are already considering incorporating Grok into their AI-powered tools. Perplexity CEO Aravind Srinivas, for instance, announced plans to fine-tune Grok for conversational search, signaling potential widespread adoption. The release also aligns with Musk’s push for transparent AI development, highlighted by his recent lawsuit against OpenAI.


Q: What do you call a satisfied YouTuber?

A: A “content” creator. 😌

What’s new? Starting Monday, YouTube creators face new transparency requirements, reflecting the platform’s push to accurately label and distinguish AI-generated content.

What are the new requirements? Creators must now flag videos featuring AI-generated or manipulated content, prompting YouTube to attach appropriate disclosure labels. These labels notify viewers when content contains significantly altered or synthetic elements, which is particularly crucial for sensitive topics like politics. Creators who fail to comply with the disclosure mandate may face penalties, including content removal or suspension from monetization programs.

How will it work? When a YouTube creator uploads a new video, they will be prompted to fill out a simple report, indicating any use of AI-generated content. From here, YouTube will automatically add a label in the description noting that the given video contains “altered or synthetic content” and that the “sound or visuals were significantly edited or digitally generated.” For videos on “sensitive” topics such as politics, the label will be displayed more prominently on the video screen.

What’s the significance? This labeling initiative aims to curb confusion and misinformation among users encountering increasingly realistic AI-generated videos. The move responds to concerns from both online safety experts and the public regarding the potential for AI-generated content to mislead people, especially ahead of significant events like the 2024 general elections.



The Singularity Meter Rises 0.7%: Scammers are using AI to file your taxes without you knowing it

What do ya think of this latest newsletter?
