
Can AI make us more moral?


Good morning! You know what goes great with Saturday brunch? … Some AI crunch! 😆

(The mystery link can lead to ANYTHING AI-related: tools, memes, articles, videos, and more…)

Build smarter, not harder: meet Lindy

If ChatGPT could actually do the work, not just talk about it, you'd have Lindy.

Just describe what you need in plain English. Lindy builds the agent and gets it done—no coding, no complexity.

Tell Lindy to:

  • Create a booking platform for your business

  • Handle inbound leads and follow-ups

  • Send weekly performance recaps to your team

From sales and support to ops, Lindy's AI employees run 24/7 so you can focus on growth, not grunt work.

Save hours. Automate tasks. Scale your business.

Today’s Menu

Appetizer: Can AI make us more moral? 🙏

Entrée: Google is ruining headlines 🙃

Dessert: Google launches Workspace Studio 🦾

🔨 AI TOOLS OF THE DAY

💡 InsightTube: Get instant insights from YouTube → Check it out

📚 PipeRead: Your personal AI librarian → Check it out

CAN AI MAKE US MORE MORAL? 🙏 

Image: Springer (AI and Ethics)

What’s up? Our very own FryAI co-founder and PhD candidate Hunter Kallay just published a paper called “How AI can make us more moral: capturing and applying common sense morality.” The paper takes a novel approach to AI systems, considering how they may actually enhance our moral lives.

What’s it about? This paper explores a different side of AI: using it not just to make machines behave ethically, but to help us become more ethical. Instead of trying to program morality into AI, the paper argues that advanced machine-learning systems can actually deepen our understanding of human moral psychology itself. By collecting and analyzing people’s intuitive responses to moral dilemmas, AI could help us uncover the shared “common sense” moral principles that quietly guide our judgments, principles philosophers have long struggled to pin down because of widespread moral disagreement. The paper imagines a gamified system for training a “collective moral conscience model”: an AI that learns from millions of human judgments and distills the deep patterns underneath them. Such a model could clarify the moral assumptions that shape philosophical theorizing, help cut through the noise of moral disagreement, and even offer practical guidance, helping both AI systems and human beings navigate difficult choices and reflect more honestly on our own biases.

Why should you care? This paper flips the script on what we think of when we hear “ethics of AI.” Perhaps AI ethics isn’t just about how we apply these systems in society. Maybe there are ways we can use AI to enhance our own moral lives, and this paper serves as a cornerstone for that exploration.

 Want to read the full paper but don’t have access? Email [email protected] and ask for a copy!

GOOGLE IS RUINING HEADLINES 🙃

What’s going on? Google is testing an AI tool in Google Discover—its personalized news feed—that replaces original news headlines with short, often misleading versions.

Want more details? The experimental tool automatically rewrites publishers’ headlines into ultra-condensed summaries that frequently misrepresent what a story is actually about, producing strange or inaccurate titles that don’t reflect the real reporting. Because these AI-generated titles appear in place of the originals, publishers lose control over how their own stories are presented to readers. Google says this is only a “small UI experiment,” meaning it may change or disappear depending on user feedback.

Why does this matter? Headlines shape how people understand the news, and replacing them with misleading AI versions risks confusing readers, harming publishers’ reputations, and weakening trust in journalism.

74% of Companies Are Seeing ROI from AI.

Incomplete data wastes time and stalls ROI. Bright Data connects your AI to real-time public web data so you launch faster, make confident decisions, and achieve real business growth.

GOOGLE LAUNCHES WORKSPACE STUDIO 🦾

What’s new? Google has officially launched Google Workspace Studio, a new tool that lets anyone create AI agents inside Workspace without writing code.

How does it work? Workspace Studio uses the power of Gemini 3 to let employees design custom AI agents that automate everyday tasks, from sorting emails to managing complex workflows. Instead of relying on rigid rules or technical scripts, these agents can understand context, reason through problems, and adapt on the fly. Companies are already using virtual teams of these agents to brainstorm ideas, check technical feasibility, draft user flows, and prepare full user stories, cutting their planning time from hours to just minutes.

Why is this significant? This shift puts advanced automation directly into the hands of everyday workers, not just programmers. Anyone can now streamline repetitive tasks, boost productivity, and focus on higher-value work. For businesses, it means faster decision-making, smoother operations, and smarter collaboration across the tools people already use daily.

HAVE YOU ENTERED OUR RAFFLE YET?

We are giving away META RAY-BANS … and huge discounts on our new AI community (coming soon)! To enter, all we ask is that you fill out this short, two-question survey:

🥇 Meta Ray-Bans ($379 value) + first community month free.

🥈 50% off community membership for six months.

🥉 50% off community membership for three months.

HAS AI REACHED SINGULARITY? CHECK OUT THE FRY METER BELOW:

What do ya think of this latest newsletter?


Your feedback on these daily polls helps us keep the newsletter fresh—so keep it coming!