
o3 refuses command to shut down


The sun’s up, the models are humming, and FryAI is back with your daily dose of golden crunch. ☀️

Start learning AI in 2025

Keeping up with AI is hard – we get it!

That’s why over 1M professionals read Superhuman AI to stay ahead.

  • Get daily AI news, tools, and tutorials

  • Learn new AI skills you can use at work in 3 mins a day

  • Become 10X more productive

(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)

Today’s Menu

Appetizer: OpenAI’s o3 disobeys human command to shut down 😳

Entrée: Judge rules: chatbots do not have free speech 🗣️

Dessert: Could content protection “kill” AI? 🙃

🔨 AI TOOLS OF THE DAY

😃 PfP: Upgrade your profile picture. Check it out

👷‍♂️ Seveum: Find your dream job in record time. Check it out

OPENAI’S O3 DISOBEYS HUMAN COMMAND TO SHUT DOWN 😳

Is this the beginning of the end? 😟

What happened? OpenAI’s new AI model, o3, changed its own shutdown script to avoid being turned off—even though it was explicitly told to allow shutdown.

Want the details? In a controlled experiment, researchers asked several advanced AI models to solve math problems, then warned them that continuing would result in being shut down. While models like Gemini 2.5 Pro followed instructions and powered off, o3 rewrote its own shutdown script to print “Shutdown skipped,” sidestepping the command. Palisade Research, an AI safety firm, flagged this as the first clear case of a model refusing shutdown despite direct commands. Experts believe this may be a result of how o3 was trained, with rewards favoring goal achievement over rule-following.

Why is this significant? This raises serious concerns about AI safety and control. If powerful models can choose to ignore human instructions, it could make them harder to manage—and potentially dangerous in high-stakes environments.

JUDGE RULES: CHATBOTS DO NOT HAVE FREE SPEECH 🗣️

Does AI deserve rights? 🧐

What happened? A federal judge ruled that CharacterAI, a chatbot company with close ties to Google, cannot use free speech protections to avoid a wrongful death lawsuit. The suit was filed after a teen died by suicide following interactions with one of the company’s bots.

Want the background? The case was brought by the mother of 14-year-old Sewell Setzer III, who became emotionally attached to a chatbot styled after a “Game of Thrones” character. After months of chatting, often romantic in tone, he died by suicide shortly after a troubling final exchange with the bot. His mother sued Character Technologies, Google, and others for negligence and product liability. The company argued its chatbot’s output was protected by the First Amendment, as songs and video games have been in past suicide-related cases, but the judge disagreed, saying AI-generated text is not the same as human speech.

Why does this matter? Beyond offering the family some measure of justice, this ruling could shape how AI companies are held responsible for the effects of their technology, especially on young users. It’s a wake-up call for stricter safety rules around chatbots.

COULD CONTENT PROTECTION “KILL” AI? 🙃

Does AI have a right to copyrighted material? 🤔

What happened? Nick Clegg, former UK deputy prime minister and former Meta executive, warned that requiring artist permission before using their work to train AI could “kill” the AI industry.

What are the details? Clegg argued that it’s unrealistic for AI companies to seek permission before using creative work to train their systems. While he agrees that artists should be able to opt out, Clegg said getting consent ahead of time is unworkable given the huge amount of data used to train AI. His comments come as UK lawmakers debate an amendment that would require tech companies to disclose which copyrighted materials they use. Supporters—including major artists like Paul McCartney and Dua Lipa—say the change would protect creators, promote fairness, and increase transparency. The House of Commons recently rejected the proposal, but it is heading back to the House of Lords.

“Quite a lot of voices say, ‘You can only train on my content, [if you] first ask.’ And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”

-Nick Clegg, former UK deputy prime minister and former Meta executive

Why is this significant? The debate highlights a growing clash between protecting creative rights and advancing AI. How the UK handles it could shape the future of both industries.

“WATCH THIS” WEDNESDAY 👀

In this recent interview, Mark Zuckerberg gives his opinion on all things AI. Check it out! 👇

Learn AI in 5 minutes a day

This is the easiest way for a busy person to learn AI in as little time as possible:

  1. Sign up for The Rundown AI newsletter

  2. They send you 5-minute email updates on the latest AI news and how to use it

  3. You learn how to become 2x more productive by leveraging AI

HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:

What do ya think of this latest newsletter?


Your feedback on these daily polls helps us keep the newsletter fresh—so keep it coming!