
Is Sam Altman Evil?: The Power, Wealth, and Controversy Behind OpenAI’s CEO

Welcome to this week’s Deep-Fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts in-depth analyses of cutting-edge artificial intelligence (AI) developments and developers. Today, Fry Guy dives into the meteoric rise (and potential fall) of OpenAI CEO Sam Altman. We hope you enjoy!

*Notice: We do not receive any monetary compensation from the people and projects we feature in the Sunday Deep-Fried Dives with Fry Guy. We explore these projects and developers solely to reveal interesting and cutting-edge AI developments and uses.*



Sam Altman has become a household name across the world, joining the likes of Elon Musk, Jeff Bezos, Bill Gates, and Donald Trump. Everyone, from kindergarten students to factory workers, farmers, and retired veterans, knows who Altman is. If you don’t, you might be living under a rock (well, not exactly—but you get the point).

Altman’s rise to fame began on December 11th, 2015. On this day, OpenAI was born. The company set out with a bold mission:

"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

Fast forward to today, and the landscape has shifted dramatically. OpenAI has become a for-profit powerhouse valued at more than $80 billion. ChatGPT, its flagship product, has over 200 million active users each month. And CEO Sam Altman has gone from a relatively unknown figure to one of the most prominent names in technology, acquiring immense wealth and influence along the way. But as Altman’s power has grown, so have concerns about his leadership and intentions. So we think it is worth digging deeper into his story.

Is Sam Altman one of the most innovative businesspeople of all time, or is he an evil genius, carrying out a master plan to destroy humanity?

ALTMAN’S RISE TO FAME AND FORTUNE

Sam Altman wasn’t always the household name he is today. After dropping out of Stanford, Altman co-founded a location-based social media app called Loopt in 2005. Needless to say, the app was nowhere near as successful as ChatGPT. But while his first venture did not achieve widespread success, Altman’s involvement in Silicon Valley deepened over time. He climbed the ranks of technology startups and went on to become a venture capitalist. After a few years, he became the president of Y Combinator, one of the world’s most prestigious startup incubators, helping to launch some of the world’s most successful tech companies.

In 2015, Altman hit the jackpot. He helped start OpenAI alongside Elon Musk, Greg Brockman, Ilya Sutskever, and others. In 2019, Altman became CEO. And just three years later, in November of 2022, OpenAI released ChatGPT. The world was changed forever.

Within two months, ChatGPT amassed 100 million users, making it the fastest-growing consumer application in history at the time. The milestone rocketed Altman into the limelight and cemented his name alongside tech giants like Elon Musk and Mark Zuckerberg.

As ChatGPT grew exponentially, so did Altman’s fame and fortune. Altman’s net worth quickly skyrocketed to over $2 billion, and he was soon spotted driving a $4 million Koenigsegg Regera supercar and purchasing lavish properties, including a $27 million mansion in San Francisco and a $43 million, 12-bedroom estate in Hawaii. His rise was nothing short of meteoric. Altman was featured on almost every prominent podcast and news show and earned accolades like TIME Magazine’s “CEO of the Year” for 2023.

OPENAI’S SHIFTING CULTURE

OpenAI was started as a company that aimed to “benefit humanity as a whole, unconstrained by a need to generate financial return.” However, Altman’s growing power and wealth have led to a shift in OpenAI’s culture over time. Initially a nonprofit focused on developing AI to benefit humanity, the company created a for-profit (“capped-profit”) arm in 2019, the same year Altman became CEO. This shift sparked concerns that profit, rather than safety, was becoming the main priority.

The for-profit approach of OpenAI has been challenged by many, including Elon Musk, who left the company and later filed a lawsuit accusing it of abandoning its commitments to safety and deceiving investors and the public. Altman has also been criticized for silencing would-be whistleblowers who try to raise safety concerns. Few employees are willing to speak about this publicly, partly because OpenAI has come under scrutiny for having departing workers sign offboarding agreements with non-disparagement provisions; employees who refused to sign risked forfeiting their vested equity, potentially worth millions of dollars. These concerns became serious enough that current and former OpenAI employees published an open letter calling for “a right to warn about advanced artificial intelligence.”

In late 2023, amid safety and secrecy concerns, Altman was abruptly fired by OpenAI’s board of directors. According to the board, Altman was “not consistently candid in his communications,” leading to mistrust between him and the board as well as within the company. However, the firing generated so much public relations flak, along with an employee revolt and pressure from investors, that rehiring Altman quickly became the only viable option. Within days, he was reinstated as CEO, this time with a new, more Altman-friendly board in place. The reshuffle solidified Altman’s control, making it far less likely that he could be ousted again.

As Altman’s influence grew under the new board of directors, so did tensions within OpenAI. Multiple high-profile figures involved in the company’s AI safety efforts began to leave, citing concerns over its direction. Ilya Sutskever, OpenAI’s co-founder and chief scientist, quit to start his own AI safety startup, one aimed squarely at safety over profits. According to reporting on his departure, Sutskever had grown “increasingly worried that OpenAI’s technology could be dangerous and that Mr. Altman was not paying enough attention to that risk.” Jan Leike, who co-led OpenAI’s Superalignment (safety) team alongside Sutskever, resigned as well, voicing similar fears and claiming that “safety culture and processes have taken a backseat to shiny products.” On top of Sutskever’s and Leike’s departures, John Schulman, another co-founder and a key member of OpenAI’s safety efforts, left the company to join Anthropic, one of OpenAI’s rivals.

All of these major departures have something in common: each of these people held a senior position at OpenAI and worked on AI safety initiatives within the company. The crux of the issue isn’t just that these employees left, but why. Almost all of them departed due to OpenAI’s “reckless desire to become the king of AI at all costs,” regardless of safety considerations.

THE POWER STRUGGLE FOR AGI

Altman’s ultimate goal is achieving AGI (artificial general intelligence): a machine capable of human-level intelligence. In public interviews, he has been candid about his willingness to do “whatever it takes” to reach this milestone. Earlier this year, he reportedly sought to raise as much as $7 trillion, in talks that included the United Arab Emirates (UAE), to develop AI chips that could help OpenAI reach AGI faster than any competitor. On top of that, Altman has invested millions of dollars in nuclear energy companies, betting they can supply the enormous amounts of power AI will demand.

This aggressive pursuit of AGI has raised eyebrows. Critics worry that OpenAI’s breakneck speed toward superintelligence comes with significant risks. Geoffrey Hinton, one of the leading voices in AI, has gone on record stating that there’s a “50/50 chance” that AI could outsmart and overtake humanity. Altman’s detractors fear that his drive for AGI, combined with his increasing power, could make him a modern-day tech villain who is flipping a coin on humanity’s destruction.

Adding to all this intrigue, Altman has been quietly preparing for a doomsday scenario. He has apparently stockpiled guns, gold, antibiotics, and potassium iodide at his sprawling Hawaiian estate. Whether these preparations are in response to potential AI risks or other global threats remains unclear, but they underscore the gravity with which Altman approaches the future.

Now, whether Altman will use his increasing power to become an evil overlord is up for debate. But one thing is for sure: Altman has more power and control over AI than any other person on Earth, and he seems to cherish it. Left unchecked, a few poor decisions on his part while developing AI could end humanity as we know it. And that’s concerning, considering his AI safety track record and his relentless pursuit of AGI.

A FINAL WORD (WARNING)

Sam Altman’s rapid rise to fame, wealth, and influence has made him one of the most powerful figures in AI. But as he continues his quest for AGI, concerns are mounting that he is prioritizing speed and profits over safety. With many top executives leaving OpenAI and safety experts sounding alarms, the world will be watching closely to see whether Altman’s decisions lead to groundbreaking advancements or catastrophic consequences. As Altman himself has stated, “history will be the judge.” In the meantime, while all of this plays out, if you see Altman cruising around in his $4 million supercar, stop him and ask whether you can stay at his doomsday compound in Hawaii if he screws up AI. At least you’ll be prepared!
