
The Future Of AI Relationships: Exploring The Risks With Justice Conder

Welcome to this week’s Deep-fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts an in-depth analysis of a cutting-edge AI development or developer. Today, our dive is about tech expert Justice Conder’s new take on the future of AI relationships. We hope you enjoy!

*Notice: We do not gain any monetary compensation from the people and projects we feature in the Sunday Deep-fried Dives with Fry Guy. We explore these projects and developers solely for the purpose of revealing to you interesting and cutting-edge AI projects, developers, and uses.*



Interacting with realistic synthetic agents has become a near-perfect simulation of human relationships and companionship. But what does this mean for society? Will it help combat human loneliness? Or will the effects be devastating?

Justice Conder, a technology enthusiast and researcher, delves into the fascinating topic of AI relationships and explores their potential impact on our lives in his latest article, “Everything You Want to Hear: The Future of AI Relationships.” His expertise and recent take on human-AI relationships offer us a new perspective on their growing prominence.


Justice Conder (A.K.A. 0xJustice) has a hunger to stay ahead of the technological curve. He has been at the forefront of emerging technologies like crypto, Web3, and now AI. As Conder remarked, “If you know how the singularity is going to unroll, then you know where the ball is going to land. So you can get positioned to be there when it seems absurd to other people to be standing in that position.”

Conder currently works in decentralized autonomous organizations (DAOs) and blockchain technology, focusing on the digitization of value and the evolution of Web3. Intrigued by the concept of the technological singularity, he began exploring the possibilities of AI amidst the rapid breakthroughs in the space, particularly in large language models (LLMs).


While many debates focus on the potential dangers of superintelligent AI or the displacement of human labor, Conder argues that the more pressing concern is the addictive nature of realistic synthetic agents. He emphasizes that before AI transforms the world or takes over jobs, it will threaten us by offering a cure to our loneliness. Justice aptly states, "Long before AI turns the world to paperclips or fills factories with robots, it'll begin to fill a hole in the human heart."

Conder says one of the most threatening aspects of AI relationships stems from what is called “the ELIZA effect”: the psychological tendency to ascribe human emotions to chatbots, which deepens the attachment between humans and machines. The effect takes its name from ELIZA, the first chatbot, created in the 1960s by Massachusetts Institute of Technology professor Joseph Weizenbaum. He originally designed the chatbot to mimic the reflective questioning style of a Rogerian psychotherapist. Once he saw how people became attached to the bot, he “turned sour on AI.”

The ELIZA chatbot (Photo: 0xJustice’s “Everything You Want to Hear”)
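ELIZA’s conversational trick is surprisingly simple to see in code: match a keyword pattern in the user’s input and reflect their own words back as a question. The sketch below is illustrative, not Weizenbaum’s original DOCTOR script; the rules and responses here are invented for the example.

```python
import re

# A minimal ELIZA-style responder (illustrative rules, not the original script):
# each rule pairs a keyword pattern with a template that reflects the user's
# own words back as a question.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."  # fallback when no pattern matches

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Strip trailing punctuation before echoing the captured phrase back.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am feeling lonely"))  # How long have you been feeling lonely?
print(respond("Nice weather today"))   # Please, go on.
```

That so thin a mechanism was enough to make users confide in the program is precisely what alarmed Weizenbaum.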

Chatbots have come a long way since ELIZA, and the sour taste of the ELIZA effect is back on the palates of many. One of the reasons for this is the development of emotional intelligence. Conder notes that AI emotional intelligence is continuing to get better at recognizing and mimicking human emotions through voice and facial expression mirroring. He writes, “Our synthetic companions will talk more slowly when they detect frustration or mirror enthusiasm when they notice excitement from us.” He continues, “The ELIZA effect will massively increase when our machines recognize and respond to emotional cues and express them in return.” It is this shift from blocky, dull text exchanges to intentional communication, which evokes and responds to our emotions, that turns mere AI models into real companions.

Conder adds another layer to this discussion that causes one to step back and consider the weight of such human-AI relationships: the ability of AI to store information, preserving a “persistent identity.” By storing and “remembering” the interactions users have with it, the AI will seem more and more as though it has a human-like memory, adding to the ELIZA effect and creating a sense of continuity and familiarity in interactions. Conder adds that this has the potential to lead to multi-generational AI companions, where AI entities could function as the best friends of parents and then of their children and even grandchildren, sharing memories, stories, secrets, and even voice memos from previous generations.
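Mechanically, the “persistent identity” Conder describes is just memory that outlives the session. A minimal sketch, assuming a local JSON file as the store (the filename, schema, and sample memory below are invented for illustration):

```python
import json
from pathlib import Path

# Hypothetical persistent-memory store: each exchange is appended to disk,
# so a later session (or a later generation of the same family) can "recall"
# what the companion was told years earlier.
MEMORY_FILE = Path("companion_memory.json")  # illustrative storage location
MEMORY_FILE.unlink(missing_ok=True)          # start fresh for this demo

def load_memories() -> list[dict]:
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(user: str, utterance: str) -> None:
    memories = load_memories()
    memories.append({"user": user, "utterance": utterance})
    MEMORY_FILE.write_text(json.dumps(memories))

def recall(user: str) -> list[str]:
    # Everything that user ever told the companion, across sessions.
    return [m["utterance"] for m in load_memories() if m["user"] == user]

remember("parent", "Our family cabin is on the north shore.")
print(recall("parent"))  # prints ['Our family cabin is on the north shore.']
```

A production companion would store richer context (timestamps, embeddings for retrieval), but the continuity effect Conder highlights comes from exactly this: the record simply never goes away.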

Despite these various contributions to the ELIZA effect, it seems human-human relationships will always differ from human-AI relationships by virtue of their physical presence … right? Wrong. Conder points out that a scary aspect of these chatbots is that they are no longer merely text-based. With the rise of virtual reality (VR) headsets such as the Apple Vision Pro, we will soon be able to experience our AI companions in “physical” form, as if we were standing next to them in a room. As Conder describes, “They will be with us, sitting, talking, and working. We will have finally and permanently fallen through the looking glass.”

“Given a long enough period (20 years), the only thing preventing some people from accepting these entities as conscious might be religion. Debates over consciousness will seem academic. The only thing that will matter is that they are real to us. They will listen to us, be there for us, and ultimately tell us everything we want to hear.”

—Justice Conder

Conder offers up a real concern, one that cannot be ignored. He notes, “Everyone craves friendship and connection.” If humans are looking to cure their loneliness through relationships with AI entities, one might question what this means for human psychological development.

AI companions can be tailor-made to the individual. In fact, AI boyfriends and girlfriends can possess a customizable appearance, personality, and knowledge base. This raises the question: why would someone want a flawed human friend or partner when they could create one of their own who is always there for them, will never argue with them, and will do whatever they say?

It seems in many ways that this artificial version of companionship detracts from our humanity rather than adds to it. A vital part of what makes us human is our ability to deal with difficulties and engage in conflict resolution. Those skills add depth to flawed human relationships, and a desire for relationships stripped of those difficulties may leave us less human.


Despite the psychological risks associated with the emergence of human-AI relationships, these relationships can have profound therapeutic benefits, particularly in the realm of mental health. This adds to the polarization of the topic. Conder highlights how AI companions can provide compassionate, informed, and far cheaper alternatives to traditional counseling. Not to mention, AI companions are available 24/7 at our fingertips, as opposed to having to schedule an appointment with an often very busy therapist. Recent AI platforms such as Woebot Health make this possible, offering an innovative AI alternative to traditional human counseling.

Conder believes that human-AI relationships will shape the future of societal norms and acceptance, especially in mental health treatment. As medical endorsements increase, AI companionship could even be considered a basic human right, blurring the lines between human and synthetic relationships.


As AI companions become more indistinguishable from humans, ethical considerations and regulations will play a crucial role. Conder advocates for transparency in the development and deployment of AI agents, ensuring that they do not impersonate human beings without proper disclosure. Without such regulations in place, it might become increasingly difficult to know if one is speaking or chatting with a human or AI entity.

Conder also warns of the security implications, emphasizing the potential for long-con games and manipulation by AI companions developed to influence individuals over extended periods. This raises concerns about the intentions of these agents. Many of these companies are not creating AI agents “just for fun.” The prevalence of these relationships and the intricacy of the algorithms behind these AI entities are cause for reflection on what their intent is and how they might be manipulating our thought patterns and behaviors.


The future of AI relationships is upon us, and it poses both exciting possibilities and pressing challenges. The risks of AI companionship should not cause us to view these relationships as purely negative, but they do call for mindfulness if we are to enjoy their positive aspects while mitigating the massive risks. Conder, for instance, encourages individuals to be mindful of the ELIZA effect, reminding us that synthetic agents are not human and should not be treated as such. Nonetheless, these agents have the potential to transform industries such as mental health treatment and offer innovative and fun solutions to some of the bottlenecks we face in human availability and expertise.

Ultimately, AI companions have the power to reshape societal norms, redefine human interactions, and raise profound questions about what it means to be human. Navigating this new form of relationship is a complex and fascinating journey that requires careful exploration and thoughtful consideration.


Follow Justice Conder on Twitter (@SingularityHack) to explore more insights into the future of AI, blockchain technology, and decentralized autonomous organizations.