
Gemini bans election-related queries

Good morning. I hope you’re hungry, because our kitchen is overflowing with fresh fries and AI updates! 🍟

(The mystery link can lead to ANYTHING AI related. Tools, memes, articles, videos, and more…)

Today’s Menu

Appetizer: Gemini bans election-related queries 🇺🇸

Entrée: Midjourney bans Stability AI employees 🙅‍♂️

Dessert: U.S. government reports “extinction-level” AI threat 😳

🔨 AI TOOLS OF THE DAY

📸 Instanice AI: Change the aesthetic vibe of your photos. → check it out

✍️ Strut: An AI-powered workspace for writers. → check it out

💞 Anxiety Simulator: Practice talking friends through anxiety. → check it out

GEMINI BANS ELECTION-RELATED QUERIES 🇺🇸

Google is so scared of disseminating false information that it’s making Gemini more and more impotent. 👎

What’s new? Google has implemented restrictions on election-related queries that users can ask its Gemini chatbot.

Why? Google already finds itself in the AI hot seat over misinformation, following the historical inaccuracies and contentious responses that led to the disabling of Gemini’s image generation feature. With elections looming worldwide, tech platforms are bracing for a surge in misinformation, particularly with the proliferation of AI-generated content. Google is doing everything it can to protect itself from going viral (again) for all the wrong reasons. In the meantime, Gemini is becoming increasingly useless for gaining factual information.

MIDJOURNEY BANS STABILITY AI EMPLOYEES 🙅‍♂️

Q: Why was the bodybuilder banned from Walmart?

A: Shoplifting. 🏋️

What happened? Midjourney has banned Stability AI employees from using its service over alleged data scraping.

What was the incident? Midjourney has accused Stability AI of causing a 24-hour system outage through what it describes as “botnet-like activity from paid accounts.” Midjourney alleges that Stability AI employees attempted to scrape its data in the middle of the night, causing a prolonged server outage and disrupting image generation for users. The incident was disclosed in a business update call on March 6th, where Midjourney explicitly linked the disruptive activity to Stability AI’s data team.

What is the response? Stability AI CEO Emad Mostaque denied any intentional wrongdoing and claimed he was unaware of the incident. Interestingly, the feud underscores broader criticisms within the generative AI community: both companies have previously faced legal challenges for using masses of online data without consent, shedding light on the ethical concerns surrounding the industry.

U.S. GOVERNMENT REPORTS “EXTINCTION-LEVEL” AI THREAT 😳

According to a recent poll, over 80% of the American public believe AI could cause a catastrophic event, and 77% believe the government should be doing more to regulate AI.

What’s up? According to a report commissioned by the U.S. State Department, the U.S. government must act “quickly and decisively” to avert substantial national security risks associated with AI, which could, in the worst case, pose an “extinction-level threat to the human species.”

“Current frontier AI development poses urgent and growing risks to national security. The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”

-U.S. State Department

What is the proposed action? The report proposes radical measures, such as making it illegal to train AI models above a specified computing power threshold, enforced by a new federal AI agency. It also recommends stringent controls on AI chip manufacturing and export, along with outlawing the publication of powerful AI model “weights” under open-source licenses. The authors argue that these measures are necessary to counter the risks of AI weaponization, loss of control, and the intense competitive dynamics in the industry.

Will this be adopted? Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS), does not see this proposal as a real possibility. He explains, “I think that this recommendation is extremely unlikely to be adopted by the United States government.” The U.S. government is focused more on transparency requirements and guardrails rather than outright bans on development. As Allen remarks, “Absent some kind of exogenous shock, I think they are quite unlikely to change that approach.”

“WATCH THIS” WEDNESDAY 👀

Our FryAI team sat down with Greg Kowal, founder of fabbler.ai, an AI tool that is set to revolutionize story-driven game creation. Check out the interview below:

HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW

The Singularity Meter rises 1.5%: U.S. government warns of existential AI threat

What do ya think of this latest newsletter?
