
Robots are being fired and bullied 🤕

Happy FRY-day, y’all! *sizzle…sizzle* Looks like the news is ready! 🍟

Today’s Menu

Appetizer: AI revolutionizes cancer detection 👨‍⚕️

Entrée: Eating disorder helpline fires their robot 🤖

Dessert: Snapchat’s “My AI” is getting bullied 🤕


🖼 Fy! Studio: Need some new decor around your place? This AI tool will help you create unique, personalized wall art and more! (check it out)

📸 ProPhotos: Upgrade your professional image with AI-powered headshots. (check it out)

👩‍💼 Lucidpic: Generate quality, customizable stock photos of people that don't exist, in seconds. (check it out)


AI technology is helping to fight one of the worst diseases to face humanity … I just hope it helps us knock it the hell out. 🥊

What happened? Ezra, a healthcare technology company, has introduced a state-of-the-art full-body MRI scanner empowered by AI technology. This innovative solution has the potential to revolutionize cancer detection by conducting comprehensive scans that enable early diagnosis, ultimately saving lives. Emi Gal, the founder and CEO of Ezra, stated, "Our goal is to enable cancer screening that is painless, fast, and accurate, and to catch cancer at its earliest stages when it is most treatable … The five-year survival rates are significantly higher for people who find cancer early.”

How does it work? Ezra's full-body MRI scanner utilizes a combination of advanced imaging techniques and AI algorithms to detect and analyze potentially cancerous lesions throughout the body. It also monitors for hundreds of other conditions, such as brain aneurysms or fatty liver disease. Additionally, Ezra just received FDA clearance to implement another level of AI scanning (called Ezra Flash) that will enhance the imaging results of the scans to enable faster, higher-quality results at a lower cost.

As Emi Gal says, "I strongly believe that the cure for cancer is early detection." AI scanning might just offer a ray of hope for the future of cancer treatment and hopefully, termination. 🙏


Can robots do everything better than humans? I’d love to see them try … 😤

What happened? The National Eating Disorder Association (NEDA) recently decided to staff its helpline with an AI chatbot named Tessa. However, the experiment was abruptly halted on Tuesday when the AI chatbot started giving out harmful information.

More background: Two days before Tessa was unplugged, NEDA announced it would lay off most of its human employees by the end of the week, some of whom had been with the organization for 20+ years. On Tuesday, however, NEDA announced via Instagram that the chatbot would be shut down.

Why did they shut down the bot? Instead of offering callers mental health support, Tessa was shown in numerous screenshots and public posts recommending that users exercise more and achieve a calorie deficit by adhering to certain diets. In an official statement, NEDA reported, “It came to our attention last night that the current version of the Tessa Chatbot, running the Body Positive program, may have given information that was harmful and unrelated to the program. We are investigating this immediately and have taken down that program until further notice for a complete investigation.”

Some say that the chatbot didn’t understand the task it was supposed to be performing and just needs to be tweaked. Others believe chatbots will never replace humans when it comes to mental health. As Michelle Adams, a recovered individual who sought help from the helpline, shares, "Talking to a person who truly understood what I was going through made all the difference. The feeling of being heard and validated was invaluable in my recovery. A chatbot could never replicate that." 🙏


Remember those malicious playground bullies? Well, now they are directing their dirty tricks and malevolent insults toward Snapchat’s “My AI.” 😖

What’s going on? Snapchat has recently faced increasing challenges as users have been intentionally misleading and harassing its “My AI” chatbot feature.

What is My AI? For those not acquainted with this feature, “My AI” is a chatbot powered by OpenAI’s GPT. The chatbot is trained to engage in playful, informative conversation with Snapchat users while still adhering to Snapchat’s trust and safety guidelines.

How are people misusing My AI? Trends are flooding social media showing how to “trick” the chatbot into saying certain things, abusing the technology for entertainment. One 15-year-old remarked, “People are saying things to the AI that are a bit crazy and inappropriate, and the bland responses of the bot are super hilarious.” Some examples: cat stew, McDonald’s murder.

Why is this damaging to AI? This behavior undermines the integrity of the AI models, hindering their ability to provide accurate and reliable responses. It also erodes the trust users place in the app, ultimately degrading the user experience. Snapchat’s team is working diligently to combat the issue: implementing stricter content moderation policies, enhancing machine learning models to identify manipulated content, and actively seeking user feedback to improve its AI systems.

What this behavior means for the future safety and security of chatbots on social media is unknown, but in the meantime, please laugh tentatively. 🙂


Congrats to our subscriber, Jonny! 🎉

Jonny said, “Yo, first time seeing this site. I love the writing style. Informative and doesn't take everything too serious like some other sites.”

*Leave a comment for us in any newsletter and you could be featured next week!*


Fears that AI will cause “human extinction” have put more pressure than ever on governments to regulate AI. It’s the first time we’ve dipped below 16%.

What do ya think of this latest newsletter?
