Meta releases Llama 3.1

Good morning! Looking for some munchie AI updates to snack on? We’ve got them for you. 🍴
🤯 MYSTERY AI LINK 🤯
(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)
Today’s Menu
Appetizer: Meta releases Llama 3.1 🤖
Entrée: Grok 3.0 to release by end of year 🦾
Dessert: Sam Altman receives angry letter from U.S. Senate 📫
🔨 AI TOOLS OF THE DAY
🧠 SuperMemory: Your second brain, in the form of AI. → Check it out
🎬 Cuebric: AI filmmaking from concept to camera. → Check it out
🎥 Vozo: Rewrite, redub, and lip-sync viral videos with prompts. → Check it out
META RELEASES LLAMA 3.1 🤖
AI officially has its own DALL-E Llama. 😀
What is Llama 3.1? Llama 3.1 boasts a staggering 405 billion parameters and was trained using over 16,000 Nvidia H100 GPUs. Meta claims it outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. The company has also emphasized the customizability of Llama 3.1, allowing developers to tailor the model to their needs, train on new datasets, and perform additional fine-tuning without sharing data with Meta. Despite a hefty investment in developing the model, Meta is sharing Llama 3.1 freely, aiming to drive the AI industry towards open-source development.
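For readers who want to kick the tires, here is a minimal sketch of what “no data sharing” looks like in practice: loading an open-weights Llama checkpoint locally with Hugging Face Transformers, so prompts and fine-tuning data never leave your machine. The repo id and the choice of the smaller 8B variant (the 405B flagship needs a multi-GPU cluster) are illustrative assumptions, not Meta’s official quickstart.
```python
# Minimal sketch: run an open-weights Llama 3.1 checkpoint entirely on local hardware.
# Assumptions: the gated Hugging Face repo "meta-llama/Llama-3.1-8B-Instruct" (requires
# accepting Meta's license) and enough GPU/CPU memory for the 8B variant.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed repo id, for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short completion locally; nothing is sent back to Meta.
inputs = tokenizer("Explain open-source AI in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```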
Why open source? Meta believes an open-source approach will democratize access to AI, enhancing productivity, creativity, and economic growth. CEO Mark Zuckerberg stated, “I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life – and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society.”
GROK 3.0 TO RELEASE BY END OF YEAR 🦾
"Grok 3.0 will be the most powerful A.I. in the world and we're hoping to release it by December".
一 Elon Musk
— DogeDesigner (@cb_doge)
7:25 PM • Jul 22, 2024
Love him or hate him, Elon Musk is driving AI at an unparalleled pace. 🏎️
What’s new? Elon Musk announced the Memphis Supercluster, “the most powerful training cluster in the world,” which will be used to train Grok 3.0, slated for release at the end of this year.
“Supercluster: a large, highly interconnected network of computing resources specifically designed to support artificial intelligence (AI) workloads.”
Want the details? Located in Memphis, Tennessee, the data center will consist of 100,000 liquid-cooled Nvidia H100 GPUs. According to a local news outlet, the supercluster will sit in the southwestern part of the city and “will be the largest capital investment by a new-to-market company in the city’s history.” However, xAI does not yet have a contract in place with the local utility, the Tennessee Valley Authority, which requires one before it will supply electricity to projects drawing more than 100 megawatts. This may or may not end up being an issue for the project, but it is noteworthy regardless.
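To put 100,000 GPUs in perspective, here is a rough back-of-envelope calculation. The per-GPU numbers are assumptions taken from Nvidia’s published H100 SXM specs (roughly 989 dense BF16 TFLOPS and a 700 W TDP), not figures released by xAI, but they show why that 100-megawatt utility threshold matters.
```python
# Back-of-envelope math on the Memphis Supercluster.
# Assumed per-GPU figures (Nvidia H100 SXM spec sheet, not xAI data):
num_gpus = 100_000
bf16_tflops_per_gpu = 989   # dense BF16 throughput per H100 (assumed)
watts_per_gpu = 700         # H100 SXM TDP (assumed)

peak_exaflops = num_gpus * bf16_tflops_per_gpu / 1e6  # TFLOPS -> exaFLOPS
gpu_power_mw = num_gpus * watts_per_gpu / 1e6         # watts -> megawatts

print(f"Peak BF16 compute: ~{peak_exaflops:.0f} exaFLOPS")  # ~99 exaFLOPS
print(f"GPU power draw alone: ~{gpu_power_mw:.0f} MW")      # ~70 MW before CPUs, networking, cooling
```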
What’s the significance? The supercluster’s purpose is to produce and power the “world’s most powerful AI by every metric,” Grok 3.0. Since the release of ChatGPT, we have not seen any monumental leaps in AI development. Sure, there has been progress, but everyone is waiting for something to blow their socks off. Grok 3.0 might represent that leap.
SAM ALTMAN RECEIVES ANGRY LETTER FROM U.S. SENATE 📫
Q: Why didn’t Alexa run for Senate?
A: Because she likes being Speaker of the House. 🏛️
What’s up? A group of U.S. Senators has sent a letter to OpenAI CEO Sam Altman demanding answers to safety concerns.
Why? This letter follows reports of rushed safety testing for GPT-4o and a letter from OpenAI whistleblowers to the SEC about safety concerns and questionable internal procedures.
What does the letter say? The letter requests that OpenAI keep its promise to commit 20% of its computing resources to AI safety research. It also demands that OpenAI offer its next foundational model to U.S. Government agencies for “deployment testing, review, analysis, and assessment.”
What’s the significance? This letter might be an indication of stricter government oversight on AI development going forward. If anything, it confirms that OpenAI is under an intense spotlight from all angles.
TASTE-TEST THURSDAY 🍽️
Do you think we should have more or less government regulation on AI? (Leave a comment explaining your answer, and we might feature it tomorrow with the results!)
HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:
What do ya think of this latest newsletter?