
AI is getting emotional...

FryAI

Greetings, AI lovers! Our kitchen has been steaming all night as we cook up the hottest AI updates. 👨‍🍳


Today’s Menu

Appetizer: OpenAI’s new AI voice rival 🗣️

Entrée: Sunak is gone: Is safe AI gone with him? 😳

Dessert: An AI system that reads minds 🤯

🔨 AI TOOLS OF THE DAY

🔊 Vocal Remover: Separate voice from music in any song. → check it out

🏠 Renovate AI: An AI-powered interior and exterior designer for your home. → check it out

🤣 Formshare: Generate witty jokes in seconds. → check it out

OPENAI’S NEW AI VOICE RIVAL 🗣️

I lost my voice today, and I cannot tell you how frustrating that is. 🫠

What’s new? Kyutai, a French nonprofit AI research lab, has introduced Moshi, a new AI voice assistant that can express emotion in its speech.

What can Moshi do? Moshi can emulate 70 different emotions and speaking styles: it can sing, whisper, and speak with various accents. It can also speak and listen at the same time, so conversations can flow and overlap naturally instead of taking strict turns.

Where is it available? Users can try a limited version of Moshi right now on Moshi.chat, and Kyutai plans to open-source the research and model within the next few weeks.

What’s the significance? Moshi arrives shortly after OpenAI delayed the release of its advanced voice assistant, a feature said to be able to interpret facial expressions, replicate human speech patterns, and hold near real-time conversations. Moshi, by contrast, was built by a team of just eight researchers in only six months, which makes its release a huge win for the little guy.

SUNAK IS GONE: IS SAFE AI GONE WITH HIM? 😳

“Safety first” could be a thing of the past. 😬

What happened? Rishi Sunak is out as UK Prime Minister after his party’s defeat in the general election. His departure could have a major impact on “safe” AI development and regulation across the globe.

Why does this matter? Whatever you think of Sunak’s broader politics and policies, he made a major push for safe AI development over the past two years. Most notably, he hosted the first-ever AI Safety Summit, which brought together policymakers, researchers, and tech leaders from around the world. The event marked a significant step forward with the signing of the Bletchley Declaration, in which 28 countries, including the United States and China, agreed to cooperate on evaluating the risks of AI.

Beyond the Summit, Sunak repeatedly urged governments to work with tech leaders to develop informed regulatory approaches to AI rather than rushing into regulation. As he put it, “How can we write laws that make sense for something we don’t yet fully understand?” His proactive approach of involving tech leaders from around the world was lauded across the tech community, in the UK and beyond. Now that he is gone, many worry that this effort to incorporate industry and research input into regulatory decision making will go with him, leaving government leaders across the globe to make regulatory decisions about a technology they understand poorly.

“[AI] will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet … In a worst-case scenario, society could lose all control over AI, preventing it from being switched off.”

-Rishi Sunak, former UK Prime Minister

AN AI SYSTEM THAT READS MINDS 🤯

Image: New Scientist

I am reading your mind, and I’m glad you love FryAI! We love you too. 😄

What’s up? Researchers at Radboud University in the Netherlands have created an AI system capable of generating accurate reconstructions of seen images based on brain activity.

How did the study work? The team recorded brain activity while subjects viewed images, using fMRI scans (MRI scans that measure changes in blood flow to show which parts of the brain are active during a task) for humans and direct electrode recordings for macaque monkeys. Over time, the AI system learned which patterns of brain activity corresponded to which visual inputs, which allowed it to produce images reflecting what the person or monkey was viewing.
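For the curious, here is a minimal sketch of the general idea behind this kind of decoding (not the Radboud team’s actual pipeline): learn a mapping from brain responses to image feature vectors, then “reconstruct” a held-out stimulus by matching the predicted features against a set of candidate images. Everything in the snippet, including the data, is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in: each "image" is summarized by a feature vector,
# and "brain activity" is a noisy linear mixture of those features.
n_images, n_features, n_voxels = 200, 32, 500
image_features = rng.normal(size=(n_images, n_features))
mixing = rng.normal(size=(n_features, n_voxels))
brain_activity = image_features @ mixing + 0.1 * rng.normal(size=(n_images, n_voxels))

# Learn the brain-to-feature mapping on most of the trials.
train, test = slice(0, 180), slice(180, 200)
decoder = Ridge(alpha=1.0)
decoder.fit(brain_activity[train], image_features[train])

# Decode held-out trials: predict feature vectors from brain activity,
# then "reconstruct" by retrieving the closest candidate image.
predicted = decoder.predict(brain_activity[test])
for i, p in enumerate(predicted):
    best = int(np.argmin(np.linalg.norm(image_features - p, axis=1)))
    print(f"test trial {i}: decoded as image {best} (true image: {180 + i})")
```

Real systems replace this linear toy with learned feature spaces and generative models that can synthesize novel images rather than just retrieve them, but the core step, mapping brain activity into an image feature space, is the same.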

“As far as I know, these are the closest, most accurate reconstructions.”

-Umut Güçlü, researcher at Radboud University

What’s the significance? This research places Radboud University’s team among several global pioneers using AI to decode visual experiences from brain data. Despite limitations, such as reliance on pre-existing image datasets, this advancement holds immense potential for applications ranging from aiding stroke victims in communication to interpreting dreams.
