
Some things to know before you invest in AI ...

It’s Thursday, and that means a “thick-cut” version of your favorite AI news! 🍟

🚨 A super special announcement: Be sure to check out our first episode of “Behind the Bots” where we interview the creators of Chirper.ai, a social media site … with no humans allowed! 🤖

Today’s Menu

Appetizer: AI powerhouse companies join forces 🦾

Entrée: Netflix lists $900,000 AI job amidst writer/actor strike 💰

Dessert: SEC cracks down on AI investments 🏛

🔨 AI TOOLS OF THE DAY

📦 ProductBot: Decide what to buy on Amazon and get recommendations based on your preferences. → check it out

💻 Namy AI: A simple tool to generate domain name ideas. → check it out

📍 MapsGPT: Quickly find and explore interesting places nearby! → check it out

AI POWERHOUSE COMPANIES JOIN FORCES 🦾

There is an exclusive group, and to be included, you have to know the password! ... and no, it’s not “password.” 🤫

What’s new? Following the White House meeting last week, four prominent companies in AI—OpenAI, Anthropic, Microsoft, and Google—have jointly declared the establishment of an industry organization responsible for ensuring the secure advancement of cutting-edge models.

What is the purpose? The Frontier Model Forum will focus on “safe and responsible” development of frontier AI models, which are defined as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks.” The president of Microsoft, Brad Smith, said, “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

More specifically, the main objectives of the forum are:

  • Promoting research in AI safety, such as developing standards for evaluating models.

  • Encouraging responsible deployment of advanced AI models.

  • Discussing trust and safety risks in AI with politicians and academics.

  • Helping develop positive uses for AI, such as combating the climate crisis and detecting cancer.

Is the group exclusive? Membership is said to be open to organizations that develop frontier AI models.

NETFLIX LISTS $900,000 AI JOB AMIDST WRITER/ACTOR STRIKE 💰

“Put your money where your mouth is!” … Well, Netflix is doing just that. 🦾

What’s going on? Hollywood executives have stated that it’s “just not realistic” to pay actors and writers more, yet Netflix has posted a listing offering up to $900,000 for an AI product manager position and up to $650,000 for its generative AI technical director role.

Some background, please? Reportedly, 87% of actors make less than $26,000 per year, and that figure is shrinking further as AI is adopted. This has prompted the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America to go on strike demanding, among other things, labor safeguards against AI.

What’s wrong with AI? Directors and producers are using AI in a variety of ways, including training machine-learning models on the work of human writers and then using AI to generate scripts, with no credit or compensation for the original authors. Additionally, AI is being used to digitally replicate actors’ performances, so a human actor need only be paid for a single day, after which their likeness and persona can continue to be used without consent or compensation.

What does the massive salary mean? Actors and writers are losing their jobs and livelihoods to AI, and there is little they can do about it at the moment besides hold picket signs and hope lawsuits come through. Nonetheless, many in the space are incredibly upset over this offer. Rob Delaney, who had a lead role in “Black Mirror,” said, “So $900k/yr per soldier in their godless AI army when that amount of earnings could qualify thirty-five actors and their families for SAG-AFTRA health insurance is just ghoulish … Having been poor and rich in this business, I can assure you there’s enough money to go around; it’s just about priorities.”

SEC CRACKS DOWN ON AI INVESTMENTS 🏛

AI stocks are the talk of the town … and the talk of Wall Street as well.

What’s up? Amid the volatile AI market, the U.S. Securities and Exchange Commission (SEC) has introduced new rules to address two important issues in the financial sector. 📈

What are the new rules?

  1. The first rule requires publicly traded companies to disclose any hacking incidents deemed serious enough to be material to investors. This move comes as cyber attacks have become increasingly frequent and costly, and the SEC aims to help investors assess the impact of these attacks.

  2. The second rule concerns the use of AI by broker-dealers. In response to the events of the 2021 "meme stock" rally, where robo-advisers and brokers used AI and game-like features to influence trading, the SEC is proposing guidelines to address potential conflicts of interest arising from the use of predictive data analytics. The objective is to ensure that the broker's financial interest doesn't supersede that of their clients.

What does this mean? These regulatory changes demonstrate the SEC's commitment to strengthening the financial system's resilience against cyber threats and to protecting investors from the risks posed by AI, allowing for more informed decisions. While some critics argue that existing requirements are sufficient and that the new rules might inadvertently reveal vulnerabilities to hackers, the SEC continues to take public feedback into account to refine the proposals further. In the end, investors and financial firms will need to adapt to these new rules, which emphasize transparency, cybersecurity, and responsible AI use.

HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW

The Fry Meter climbs 1.1%. Large companies like Google and OpenAI have promised the White House they will follow certain self-imposed rules. I mean, come on! This is like a toddler defining their own rules for their parents to enforce.

What do ya think of this latest newsletter?
