Will ads ruin ChatGPT?

Good morning, and happy Wednesday! Are you looking for a midweek boost? We’ve got you covered. 🚀
🤯 MYSTERY AI LINK 🤯
(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)
Today’s Menu
Appetizer: ChatGPT to be overrun with ads? 🤦‍♀️
Entrée: Meta says AI election misinformation was <1% 🗳️
Dessert: Amazon’s new tool combats AI hallucinations 😵‍💫
🔨 AI TOOLS OF THE DAY
👩‍💻 Infografix: Easily create visually appealing infographics. → Check it out
📓 Mindsera: A journal that reflects back. → Check it out
✅ Parafact: Fact-check any human or AI-written text. → Check it out
CHATGPT TO BE OVERRUN WITH ADS? 🤦‍♀️
Q: Why was the cab driver so good at marketing?
A: He knew how to drive in traffic. 🚕
Why? AI services, like ChatGPT, are costly to run. Ads could help offset expenses, enabling these tools to remain free for most users. OpenAI CFO Sarah Friar noted the company is exploring this advertising option carefully, following in the footsteps of Microsoft, Perplexity, and startups like Adzedek, which already combine ads with chatbot responses.
What’s the risk? If ads are poorly implemented or subtly influence chatbot answers, trust in AI could erode. A chatbot prioritizing advertisers’ interests over users’ could feel exploitative, potentially alienating customers. Advertising may be a necessary revenue strategy, but AI makers must tread carefully to avoid turning helpful tools into intrusive sales pitches. This highlights the tension between sustaining free AI tools and ensuring user trust.
META SAYS AI ELECTION MISINFORMATION WAS <1% 🗳️
It looks like humans are the problem after all. 🙃
What happened? Meta has announced that AI-generated election disinformation failed to materialize at scale on its platforms this year, despite concerns that it would play a major role.
Why is this important? At the start of the year, concerns about generative AI spreading propaganda and disinformation in global elections were widespread. However, Meta says those fears largely failed to pan out on its platforms, including Facebook, Instagram, and Threads. The results suggest that fears of AI spreading misinformation about sensitive topics may be overblown.
What did Meta find? Meta analyzed content from major elections worldwide, such as in the U.S., India, and Brazil. It reported that AI-generated election-related misinformation accounted for less than 1% of all fact-checked false information during key election periods. The company credits its existing policies and processes for detecting potentially harmful material and mitigating risks. For instance, Meta’s AI image generator rejected 590,000 requests for election-related deepfakes, including those involving prominent politicians like Joe Biden, Donald Trump, J.D. Vance, and Kamala Harris. It also disrupted 20 covert influence operations globally, targeting accounts based on behavior rather than content.
“With every major election, we want to make sure we are learning the right lessons and staying ahead of potential threats. Striking the balance between free expression and security is a constant and evolving challenge.”
AMAZON’S NEW TOOL COMBATS AI HALLUCINATIONS 😵‍💫
I have been hallucinating about French fries all day. 🍟
What’s new? AWS has introduced a new tool called “Automated Reasoning checks” to combat AI hallucinations.
How does it work? Automated Reasoning checks, available through AWS’ Bedrock service (specifically the Guardrails tool), lets users set up customizable factual guardrails for AI models. Customers provide data to serve as ground truth, which the tool uses to create and refine rules applied to the model. When the model generates a response, Automated Reasoning checks compares it against that customer-provided data. If the response appears incorrect, the tool identifies the likely error and offers the correct answer based on the provided data, displaying both the original and corrected answers so customers can see how far the model deviated from the truth. The tool can be useful across many domains; for example, businesses can use it to validate answers about HR policies or operational guidelines for consistency and accuracy. It builds on the safeguards Amazon Bedrock Guardrails already supports, like filtering sensitive content, redacting personal data, and contextual checks.
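For the curious, here is a minimal sketch of what that check-against-your-own-data flow can look like in code, using boto3’s apply_guardrail call against a Bedrock guardrail. Assumptions: a guardrail with the desired checks already exists, and the guardrail ID, version, region, and the sample HR question, policy text, and draft answer are all placeholders; the exact setup for an Automated Reasoning policy itself isn’t shown here.

```python
# Minimal sketch (not Amazon's exact workflow): validating a model's draft answer
# against a pre-configured Bedrock guardrail via boto3's apply_guardrail call.
# "gr-12345", the version, the region, and the HR example text are placeholders.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

result = runtime.apply_guardrail(
    guardrailIdentifier="gr-12345",  # placeholder guardrail ID
    guardrailVersion="1",
    source="OUTPUT",  # check the model's response rather than the user's prompt
    content=[
        # The user's question and the source document act as the grounding data.
        {"text": {"text": "How many PTO days do new hires get?",
                  "qualifiers": ["query"]}},
        {"text": {"text": "Per the handbook, new hires accrue 15 PTO days per year.",
                  "qualifiers": ["grounding_source"]}},
        # The model's draft answer, which the guardrail evaluates against that data.
        {"text": {"text": "New hires receive 20 PTO days per year.",
                  "qualifiers": ["guard_content"]}},
    ],
)

if result["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail flagged a deviation and returns its suggested replacement text.
    print("Flagged:", [o["text"] for o in result["outputs"]])
else:
    print("Answer passed the configured checks.")
```

The design point worth noticing: the customer’s own documents, not the model, are treated as the source of truth, which is what lets the tool show the original and corrected answers side by side.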
“WATCH THIS” WEDNESDAY 👀
AI is enhancing human creativity. This video breaks down the top design tools:
HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:
What do ya think of this latest newsletter?