
OpenAI's "big release" is disappointing

Hello, AI enthusiasts! Hopefully you’re hungry, because we have prepared a smorgasbord of AI delicacies. 🍇

(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)

Today’s Menu

Appetizer: OpenAI unveils GPT-4o 🤖

Entrée: Is OpenAI bribing publishers? 😳

Dessert: ARM to start producing AI chips 🦾

🔨 AI TOOLS OF THE DAY

♟️ Notice: Master chess by learning from an AI. → check it out

🎶 Lyrical Labs: Write lyrics for songs with the help of AI. → check it out

📝 Whirr: Create forms as easily as you type. → check it out

OPENAI UNVEILS GPT-4o 🤖

Image: OpenAI

In the disappointing words of Sam Altman, “It’s not GPT-5 or a search engine, but a secret third thing.” 😳

What’s new? In a widely anticipated livestream event, OpenAI released GPT-4o.

What can it do? GPT-4o (the “o” stands for “omni”) is an updated OpenAI model that focuses on image and voice recognition.

Image recognition:

The model is able to analyze documents, code, images, and more, answering specific questions about that material. In one demo, it helped an OpenAI researcher work through a math problem and review their code; in another, it even identified the emotions depicted in a selfie.

Voice recognition:

GPT-4o can respond to text, voice, or image queries in customizable voices, which can be adjusted to convey emotions such as excitement, anger, and sadness. In a demo, GPT-4o also served as a real-time translator for a conversation; the model offers translation in over 50 languages. The voice feature also allows users to interrupt the model mid-response, making for more natural conversation.

What’s the significance? GPT-4o will roll out over the next few weeks and will be freely available to the public, both on desktop and in the form of an updated API. Along with this rollout, OpenAI is giving free users access to the GPT Store, where people will be able to build their own tools using GPT-4o and explore those that others have made.
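For developers, here is a minimal sketch (not from the announcement) of what a call to GPT-4o looks like through OpenAI’s existing Python SDK, assuming the publicly documented `gpt-4o` model identifier and an `OPENAI_API_KEY` set in the environment:

```python
# Minimal sketch: a single text query to GPT-4o via the OpenAI Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarize GPT-4o's new features in one sentence."}
    ],
)

print(response.choices[0].message.content)
```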

Do you want the truth? Given the hype surrounding this announcement and the rumors of an impending GPT-5 or GPT Search release, it is a disappointing one. In many ways, the demos felt more like a glorified Siri than “magic”: the voice sounded relatively robotic, the interruptions were awkward, and the model had to be corrected multiple times. The one piece of genuinely good news is that an advanced model is now freely available to the public, but there was nothing shockingly new here. It may be that OpenAI had other plans for this event and had to make a late adjustment because GPT Search was not yet ready for release.

IS OPENAI BRIBING PUBLISHERS? 😳

I don't get why talking about money during job interviews is such a taboo … It would be much easier if they just accepted my bribe. 🙃

What’s happening? OpenAI is apparently bribing publishers, such as the Financial Times and Le Monde, into signing licensing deals that let it use their material to train AI models.

How do the bribes work? A leaked slide deck revealed that OpenAI has an initiative called the “Preferred Publishers Program” (PPP), which negotiates licensing deals on a per-publisher basis rather than on uniform terms. According to the leaked deck, the PPP is offered to “high quality editorial partners” and promises “priority placement, richer brand expression, and more prominent links.” Publishers who qualify for the PPP get an upfront payment as well as royalties based on how many ChatGPT users engage with their displayed content.

Why is this important? This information underlines how dizzying the copyright landscape currently is, amidst lawsuits and undercover bribes. It also worries many people, who are concerned that models like ChatGPT will begin displaying content biased by ad spending rather than the highest-quality information. Others point out that this is precisely what Google has done for years.

ARM TO START PRODUCING AI CHIPS 🦾

ARM is breaking out the big guns to knock out big players like Nvidia. 💪

What’s up? SoftBank Group’s subsidiary, ARM Holdings, is delving into the realm of AI chips, with plans to unveil its first product in 2025.

What’s the roadmap? The UK-based company is currently establishing an AI chip division, targeting an AI chip prototype by spring 2025 and commencing mass production in the fall of the same year. Initial development expenses, potentially reaching hundreds of millions of dollars, will be borne primarily by ARM, with financial backing from SoftBank. This move has propelled ARM’s market value beyond $100 billion, indicating investor confidence in this decision.

Why is this important? ARM’s expansion into AI chips aligns with a broader trend in the data center market, where demand for customized chips to drive AI models is burgeoning. ARM’s presence in this space will reduce reliance on established players like Nvidia, especially once a mass-production infrastructure is firmly in place, a bottleneck for many AI chip manufacturers. Discussions are already underway between ARM and manufacturing giants like Taiwan Semiconductor Manufacturing Company (TSMC) to ensure ample production capacity.

TWITTER (X) TUESDAY 🐦

HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:

What do ya think of this latest newsletter?
