
The Great Filter: How AI Might Prevent Us From Meeting Aliens

Welcome to this week’s Deep-fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts an in-depth analysis of a cutting-edge artificial intelligence (AI) development or developer. Today, Fry Guy explores how AI might prevent us from meeting aliens. We hope you enjoy!

*Notice: We do not gain any monetary compensation from the people and projects we feature in the Sunday Deep-fried Dives with Fry Guy. We explore these projects and developers solely for the purpose of revealing to you interesting and cutting-edge AI projects, developers, and uses.*



Modern-day tales of alien sightings, UFOs, and abductions have been ingrained in American folklore for well over 70 years. And with the rise of technological innovation, hopes of encountering intelligent life elsewhere in the universe have never been stronger. However, that very same technological innovation might prevent this dream from ever coming true.

According to a theory from acclaimed astronomer Michael A. Garrett, modern alien stories are bound to be hoaxes or misinformation, because aliens have never visited Earth and never will. The reason is not some literal force field shielding the planet but something called a “Great Filter”: a chain of improbable hurdles that keeps intelligent civilizations from ever making contact with one another. And in Garrett’s view, that filter exists because of the emergence of AI. Let’s figure out why he says this and what it means for human space exploration!

WHAT IS THE “GREAT FILTER”?

The idea of a Great Filter has been around since 1998, when Robin Hanson first published a paper outlining it. Hanson argues that there is a universal barrier that prevents intelligent life on different planets from communicating with one another. Despite what pop culture might suggest, this barrier is statistical, not a physical forcefield that would zap you if you touched it. The Great Filter is based on probabilities alone: a sequence of events that are each highly unlikely but that would all need to happen, in order, for intelligent life on one planet to communicate with intelligent life on another.

The Great Filter can be thought of as layers (or filters) of probability, one for each event necessary for communication with intelligent life from other planets. When the low probabilities of these events are stacked on top of each other, communicating with intelligent life on another planet becomes almost impossible. Before delving into the role of AI in the Great Filter, let’s look at some of the preliminary filters.

The first filter in the stack is the probability that a planet can support life at all. There are suspected to be trillions of planets in the observable universe, so there have to be intelligent creatures out there somewhere, right? A 2020 NASA study suggests there could be 300 million habitable planets in our galaxy alone. Steve Bryson, a researcher at NASA, stated, “What we see is that our galaxy is a fascinating one, with fascinating worlds, and some that may not be too different from our own.”

Despite the probabilities seemingly being in our favor, Earth has never detected any “technosignatures,” such as narrow-band radio transmissions, laser pulses, or waste-heat emissions, that one would expect to pick up from faraway aliens if they existed. These technosignatures are the expected byproducts of the activities of advanced technical civilizations, both in our galaxy and beyond. SETI researchers, dedicated to listening for such transmissions since the field’s founding in 1959, have yet to confirm any such signal from space. This “Great Silence” gives rise to the Fermi Paradox: why have we not observed any signs of extraterrestrial civilizations despite the vast number of stars and planets in the universe? As a 2015 article put it, “If life is so easy, someone from somewhere must have come calling by now.”

The Great Silence has led many researchers to conclude that life-sustaining conditions are much rarer than we might initially think. On top of that, the replication and long-term survival of that life adds an entirely new layer of complexity, and this is where we find the next series of filters. Even if a planet can support life, it would also have to support reproductive molecules, single-cell life, complex single-cell life, sexual reproduction, and multi-cell life. Each of these steps is necessary for life to persist and evolve, and together they would seemingly eliminate the large majority of planets that made it through the first filter.

The last stages of the Great Filter require that the biological beings that evolve on these planets be able to invent and use tools, develop the potential for space exploration, and ultimately colonize space itself. Colonizing another planet is the final filter, and the most important one: if a species can clear this last hurdle, it would also be able to communicate with other extraterrestrials, whether in its own galaxy or in galaxies farther away. But for an intelligent civilization like ours to make it through all the hurdles of the Great Filter, it will need advanced technology. Here is where we introduce AI.
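To see how quickly these stacked filters compound, here is a minimal Python sketch. Every probability below is an invented placeholder for illustration only; neither Hanson nor Garrett assigns these specific values.

```python
# Toy illustration of how "stacked" filter probabilities compound.
# All values are hypothetical placeholders, not figures from
# Hanson's or Garrett's papers.

filters = {
    "planet can support life":   1e-2,
    "reproductive molecules":    1e-3,
    "single-cell life":          1e-2,
    "complex single-cell life":  1e-2,
    "sexual reproduction":       1e-1,
    "multi-cell life":           1e-2,
    "tool-using intelligence":   1e-3,
    "space exploration":         1e-1,
    "colonizing another planet": 1e-2,
}

p_total = 1.0
for step, p in filters.items():
    p_total *= p  # each filter multiplies down the surviving fraction
    print(f"{step:28s} step p = {p:g}   cumulative p = {p_total:.0e}")

# Even against NASA's estimate of ~300 million habitable planets in
# our galaxy, the expected number clearing every filter is tiny.
print(f"\nExpected survivors among 3e8 planets: {3e8 * p_total:.1e}")
```

The punchline: nine modest-looking hurdles multiply out to odds of roughly one in 10^18, leaving essentially zero expected survivors even among hundreds of millions of candidate planets. This is why stacked filters, rather than any single barrier, can produce a Great Silence.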

HOW AI MIGHT STOP US FROM TALKING TO ALIENS

As we have seen, the last step in the Great Filter is the moment an intelligent species colonizes multiple planets. The good news is that humans are getting close to this. Elon Musk’s SpaceX, for instance, has asserted that colonizing Mars will be possible in the coming decades. And once humans do that, according to the Great Filter theory, the odds are high that we will communicate with alien life. So humans are inevitably going to break through this Great Filter and talk to aliens, right? The answer is no, and it’s because of AI.

Michael A. Garrett is an astronomer at the Jodrell Bank Centre for Astrophysics at the University of Manchester. He recently wrote a paper in Acta Astronautica titled, “Is artificial intelligence the great filter that makes advanced technical civilizations rare in the universe?” In that paper, he argues that every intelligent biological civilization that has existed or will exist in the universe eventually develops AI. For Garrett, it is that AI which prevents intelligent lifeforms from breaking through the last step of the Great Filter and achieving multi-planetary colonization.

Why does Garrett think the development of AI will inevitably prevent intelligent species from colonizing other planets? Part of the reason is that, in his view, AI will be needed to colonize another planet, so it will necessarily mature faster than any species’ ability to colonize. Our situation on Earth right now seems to bear this out: AI has grown dramatically more capable in just a few years, with no signs of slowing down. While companies like SpaceX are making huge progress in space exploration, it is much more likely that Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI), meaning AI that surpasses human intelligence, will emerge before humans live on other planets. In fact, quite ironically, AGI is likely to be a key tool in achieving the technical breakthroughs necessary to realize the goal of multi-planetary colonization. This aligns with Garrett’s theory.

According to Garrett, if AGI is achieved before we colonize other planets, humans are most likely doomed. The reason? Garrett believes that AGI will recognize humans as resource hogs and a hindrance to its own survival, and so these super-intelligent “beings” will get rid of people, whether via engineered viruses or nuclear war. Garrett explains, “Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics.” So Garrett believes that the closer a civilization gets to colonizing another planet, the higher the likelihood that its own technology will end up destroying it. For this reason, Garrett underscores the “critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multi-planetary society to mitigate against such existential threats.”

WHAT DOES THE RISE OF AI MEAN FOR HUMANS?

Despite Garrett’s doomsday prediction, there seem to be two ways humans can escape extinction at the hands of AGI.

Garrett thinks one option is for humans to place AI on a planet or asteroid other than Earth, with AI ultimately banned on Earth itself. In this scenario, humans could still use AI, but only from afar: it could help cure diseases and make our lives easier, all while staying out of reach. Imagine AI on an isolated asteroid or dwarf planet, doing our bidding without access to the resources required to escape its prison. Garrett thinks this option “allows for isolated environments where the effects of advanced AI can be studied without the immediate risk of global annihilation.”

Our second option is to overcome the odds and win the race. The Great Filter, remember, is about probabilities, not certainties. In this way, it sets up a race between the development of AGI and the human ability to colonize a planet without it: either humans will inhabit other planets first, or AI will become superintelligent first. According to Garrett, if AI wins this race, humans are most likely doomed. But if humans win, we will expand into the universe, break through the Great Filter, and communicate with aliens (if they exist) in a relatively short period of time.

With the recent push towards AGI, it looks like AI is going to win this race, but Garrett emphasizes that it isn’t over. This is part of his plea for government regulation, which he believes would buy us time to colonize other planets before AGI is developed and which he sees as vital for our survival. He writes, “Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations.” If regulation is the answer, however, we had better hurry: Garrett estimates that civilizations that begin to develop AI have only ~100-200 years to become multi-planetary. If they don’t inhabit other planets within that window, his theory predicts they’ll be wiped out by AGI.
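To make the “race” framing concrete, here is a toy Monte Carlo sketch in Python. The timeline distributions are invented purely for illustration; Garrett’s paper supplies only the rough ~100-200 year window, not these numbers.

```python
import random

# Toy Monte Carlo of the race between ASI and multi-planetary
# colonization. Both timeline distributions below are invented
# for illustration; they are not taken from Garrett's paper.

random.seed(42)
TRIALS = 100_000

wins = 0
for _ in range(TRIALS):
    years_to_asi = random.uniform(50, 200)     # hypothetical ASI arrival
    years_to_colony = random.uniform(80, 400)  # hypothetical colonization
    if years_to_colony < years_to_asi:
        wins += 1  # civilization becomes multi-planetary in time

print(f"Civilization beats ASI in {wins / TRIALS:.1%} of trials")
```

Shifting either distribution, say, slowing AGI through regulation or speeding up spaceflight through investment, changes the win rate, which is exactly the intuition behind Garrett’s call to act quickly on both fronts.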

An interesting aspect of all this Great Filter talk is that Garrett thinks this race against AI isn’t just a human problem; rather, it’s a problem for every intelligent civilization in the entire universe, since each inevitably develops AI and enters the same race. Garrett doesn’t seem to believe that any of these countless possible biological civilizations (if there are any) has yet won the race against AI, which would explain why SETI, for example, has never detected technosignatures from intelligent life. So the Great Filter might very well be real. It’s possible that intelligent life is out there, but these species have never contacted Earth because none has broken through the Great Filter. Maybe humans will be the first to do so.
