
Snapchat filters get AI upgrade

FryAI

Good morning! If you're hungry for AI knowledge, we’ve got your golden, crunchy fix right here. 🍟

(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)

Today’s Menu

Appetizer: Snapchat filters get AI upgrade 👻

Entrée: Meta releases AI audio watermarking tool 🔊

Dessert: Researchers use AI to predict anxiety 😌

🔨 AI TOOLS OF THE DAY

🦾 Obviously AI: Turn raw data into industry-leading predictive models in minutes. → check it out

🏄‍♂️ RideAI: Get insightful data to help you find the best surf spots in any location. → check it out

🚙 Road Trip Navigator: An easy, quick, and fun way to plan your next road trip. → check it out

SNAPCHAT FILTERS GET AI UPGRADE 👻

If a girl asks you to “streak” on Snapchat, don’t start taking your clothes off and running around ... 😆

What’s up? In an effort to leverage AI and stay ahead of its competitors, Snapchat is bringing updated augmented reality (AR) to its platform. This includes massive AI upgrades to Lens Studio, a place for developers and laypeople alike to create 3D AR filters and experiences.

“What’s fun for us is that these tools both stretch the creative space in which people can work, but they’re also easy to use, so newcomers can build something unique very quickly.”

-Bobby Murphy, Snap's chief technology officer

What’s new? With this update, people will be able to easily create AI-powered lenses for public use. The updated Lens Studio includes a new suite of generative AI tools, such as a conversational AI assistant and a tool that lets artists generate 3D images from a simple prompt for use in their AR lenses. This eliminates the need to build a 3D model from scratch, significantly reducing the time it takes to develop filters and making it easier for non-developers to express their creativity.

How will this improve filters? Until now, Snapchat’s AR has only been capable of simple effects, like creating mustache filters or putting a funny hat on someone’s head. According to a Reuters report, “Snap's advancements will now allow AR developers to create more realistic lenses, such as having the hat move seamlessly along with a person’s head and match the lighting in the video.” Looking ahead, Snap envisions expanding AR capabilities beyond facial enhancements to encompass full-body experiences, like generating virtual outfits.

META RELEASES AI AUDIO WATERMARKING TOOL 🔊

According to AI, Mark Zuckerberg is not a robot. But hey, I wouldn’t rat out my family either. 🤖

What’s new? Meta has introduced a novel system called AudioSeal, designed to embed hidden signals or watermarks in AI-generated audio clips.

How does it work? Available for free on GitHub, this innovative tool can identify AI-generated segments within lengthy audio recordings, such as hour-long podcasts, aiding in the detection of AI-created content online. AudioSeal embeds a watermark throughout every section of the audio track, which allows the watermark to be “localized.” This means it can still be detected even if the audio is cropped or edited. In testing, the system demonstrated impressive detection accuracy, achieving 90-100% success in identifying watermarked audio. As a result, Meta thinks AudioSeal could help mitigate the rise of misinformation and scams involving voice cloning.
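For intuition about what a “localized” watermark means in practice, here’s a minimal toy sketch in Python. This is not AudioSeal’s actual method (which uses trained neural networks to generate and detect the watermark); it only illustrates the general idea of per-frame detection scores that survive cropping. The frame size, strength, and threshold are illustrative assumptions.

```python
# Toy illustration only (NOT AudioSeal's neural approach): add a secret
# pattern to every frame of the audio, then score each frame by its
# correlation with that pattern. Per-frame scores make detection
# "localized" -- cut out a chunk and the remaining frames still score high.
import numpy as np

SR = 16_000              # sample rate (Hz)
FRAME = 1_024            # samples per detection frame
rng = np.random.default_rng(0)
key = rng.standard_normal(FRAME)
key /= np.linalg.norm(key)               # unit-norm secret watermark pattern

def embed(audio, strength=1.0):
    """Add the key to every full frame of the signal (toy embedding)."""
    out = audio.copy()
    for start in range(0, len(out) - FRAME + 1, FRAME):
        out[start:start + FRAME] += strength * key
    return out

def frame_scores(audio):
    """Correlate each frame with the key; a high score means likely watermarked."""
    n_frames = len(audio) // FRAME
    frames = audio[: n_frames * FRAME].reshape(n_frames, FRAME)
    return frames @ key                  # one score per frame

# Demo: watermark ~10 s of noise, then cut out a frame-aligned chunk.
clean = 0.1 * rng.standard_normal(10 * SR)
marked = embed(clean)
cropped = np.concatenate([marked[: 40 * FRAME], marked[80 * FRAME:]])

for name, sig in [("clean", clean), ("cropped", cropped)]:
    flagged = frame_scores(sig) > 0.5    # threshold chosen for this toy
    print(name, "frames flagged as watermarked:", f"{flagged.mean():.0%}")
```

A real detector like AudioSeal’s has to stay robust to compression, resampling, and edits that don’t fall on neat frame boundaries, which is exactly what this toy does not handle.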

Will this actually work? Despite the capabilities of AudioSeal, watermarks like this remain very easy to tamper with. During testing, Meta researchers tried different attacks to remove the watermarks and found that the more information is disclosed about the watermarking algorithm, the more vulnerable it is. According to a report from MIT Technology Review, the system also requires people to voluntarily add the watermark to their audio files. As a result, it’s unlikely this approach will actually work.

“I’m skeptical that any watermark will be robust to adversarial stripping and forgery.”

-Claire Leibowicz, head of AI and media integrity at Partnership on AI

RESEARCHERS USE AI TO PREDICT ANXIETY 😌

I tend to overthink things … but then again, do I really? 🤔

What’s the scoop? Researchers from the University of Cincinnati developed an AI system called “Comp Cog AI” that can predict anxiety levels with up to 81% accuracy.

How does this work? The study involved 3,476 participants whose demographics reflected those of the U.S. population. The participants were shown 48 pictures with mildly emotional content and asked to rate each as positive or negative, a task designed to quantify their judgment patterns based on their emotional responses to the images. Alongside the picture ratings, participants answered a few demographic and contextual questions, such as their age and feelings of loneliness, which provided additional context for their responses. The picture ratings and these contextual variables were then fed into machine learning algorithms, which identified patterns and correlations that could predict the participants’ anxiety levels.
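As a rough illustration of the kind of pipeline described above, here is a short, hypothetical sketch: 48 binary picture ratings plus a couple of contextual variables are assembled into a feature table and fed to an off-the-shelf classifier. The synthetic data, feature names, and choice of model are assumptions for illustration; the study’s actual “Comp Cog AI” system may differ substantially.

```python
# Hypothetical sketch only: synthetic data standing in for 48 picture
# ratings plus age and loneliness, fed to an off-the-shelf classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 3476                                      # participants, as in the study
ratings = rng.integers(0, 2, size=(n, 48))    # 0 = rated negative, 1 = positive
age = rng.integers(18, 80, size=(n, 1))
loneliness = rng.integers(0, 5, size=(n, 1))  # assumed 0-4 self-report scale
X = np.hstack([ratings, age, loneliness])

# Synthetic anxiety label, loosely tied to the features so there is a
# pattern to learn; a stand-in for whatever anxiety measure the study used.
y = ((ratings.mean(axis=1) < 0.45) & (loneliness[:, 0] >= 3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 2))
```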

Why is this important? Anxiety disorders are extremely prevalent and often time-consuming to diagnose. Using AI to detect these patterns in thinking could help those struggling with anxiety get quicker, more effective treatment.

TASTE-TEST THURSDAY 🍽️

Do you think the AI stock market boom is a bubble that will pop?

(Leave a comment explaining your answer and we might feature it tomorrow with the results)


HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:

What do ya think of this latest newsletter?
