Meta Debuts Your Virtual Twin
Bonjour! Today’s AI tidbits are crispier than the perfect French fry, so grab your favorite dipping sauce and let’s get munching. 🍟
🤯 MYSTERY AI LINK 🤯
(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)
Today’s Menu
Appetizer: Meta rolls out AI Studio 🤯
Entrée: Meta introduces new segmentation model 🎥
Dessert: Apple chooses Google chips over Nvidia 👀
🔨 AI TOOLS OF THE DAY
🏀 Hooper: Track your basketball stats and create highlights with ease. → Check it out
💬 ZixFlow: Next generation CRM over email, SMS, and WhatsApp. → Check it out
♻️ Reactor: An eco-friendly chatbot. → Check it out
META ROLLS OUT AI STUDIO 🤯
Q: How do social media influencers get paid?
A: Per DM. 😆
What’s new? Meta has unveiled AI Studio, a cutting-edge AI tool enabling social media users to create, share, and design personalized AI chatbots to engage with their followers.
What’s the point? Built on Llama 3.1, which was released last week, the tool lets users craft customized AI characters or avatars that Instagram creators can use as extensions of themselves to handle common DM questions and story replies. These chatbots work across multiple Meta platforms, including Instagram, Messenger, WhatsApp, and the web.
“Instagram creators can set up an AI as an extension of themselves that can quickly answer common DM questions and story replies. Whether it’s sharing facts about themselves or linking to their favorite brands and past videos, creator AIs can help creators reach more people and fans get responses faster. It’ll be almost like this artistic artifact that creators create that people can kind of interact with in different ways.”
What about privacy? According to Meta, creators can customize their AI based on things like their Instagram content, topics to avoid, and links they want to share. Creators can also turn auto-replies on and off and even decide who their AI replies to and who it doesn’t. Responses from these AI characters are clearly labeled, so there’s full transparency for friends and fans.
META INTRODUCES NEW SEGMENTATION MODEL 🎥
Introducing Meta Segment Anything Model 2 (SAM 2) — the first unified model for real-time, promptable object segmentation in images & videos.
SAM 2 is available today under Apache 2.0 so that anyone can use it to build their own experiences
Details ➡️ go.fb.me/p749s5
— AI at Meta (@AIatMeta)
10:48 PM • Jul 29, 2024
We aren’t done talking about Meta yet … 🤪
What’s new? Meta has introduced the Segment Anything Model 2 (SAM 2), the first unified model capable of identifying which pixels belong to a target object in an image or video and following them in real time.
What does it do? Segmentation, the process of identifying which image pixels belong to an object, is crucial for tasks like scientific image analysis and photo editing. The original SAM inspired AI-enabled image editing tools like Backdrop and Cutouts on Instagram and catalyzed diverse applications in science, medicine, and numerous other industries. For instance, SAM has been used in marine science to segment sonar images of coral reefs, to analyze satellite imagery for disaster relief, and to segment cellular images to aid in detecting skin cancer.

SAM 2 extends these capabilities to video, addressing challenges such as fast object movement, appearance changes, and occlusion. This will enable easier video editing and new mixed reality experiences. SAM 2 can also speed up annotation of visual data for training computer vision systems, including those used in autonomous vehicles, and enable creative ways of interacting with objects in real-time or live videos.
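Conceptually, what a segmentation model outputs is just a per-pixel mask: a yes/no answer for every pixel about whether it belongs to the target object. Here’s a minimal NumPy sketch of that idea (a toy thresholding example, not SAM 2’s actual API):

```python
import numpy as np

# Toy "image": a 6x6 grayscale array containing one bright object.
image = np.zeros((6, 6), dtype=np.uint8)
image[2:5, 1:4] = 200  # the "object": a bright 3x3 patch

# A segmentation mask labels which pixels belong to the object.
# Real models predict this; here a simple threshold stands in.
mask = image > 128

# The mask can then drive edits, e.g. cutting the object out of
# the frame (everything outside the mask becomes background).
cutout = np.where(mask, image, 0)

print(int(mask.sum()))  # how many pixels the mask assigns to the object
```

SAM 2 does this per frame and tracks the mask through a video, which is what makes the editing and annotation use cases above possible.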
APPLE CHOOSES GOOGLE CHIPS OVER NVIDIA 👀
Tostitos or Lays? 🧐
What’s up? Apple revealed in a technical paper that its AI models, the core of Apple Intelligence, were pre-trained on Google-designed Tensor Processing Units (TPUs) in the cloud.
What does this mean? Apple’s move suggests Big Tech is exploring alternatives to Nvidia’s GPUs for AI training. Nvidia’s GPUs have dominated the AI training market for the past two years but are costly and in high demand, and companies like Apple, OpenAI, Microsoft, Meta, and Tesla have relied heavily on them. Many, however, are leaning less and less on the chip giant: Google has created its own chips, primarily for internal use, and OpenAI is looking to design chips of its own as well. Apple’s decision to use Google’s TPUs highlights this significant shift in the AI infrastructure landscape, indicating potential diversification in AI training hardware among leading tech companies. Expect this pattern to continue.
“WATCH THIS” WEDNESDAY 👀
This concise video outlines OpenAI’s new SearchGPT tool:
HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW: