    How Peter Thiel’s Relationship With Eliezer Yudkowsky Launched the AI Revolution

    May 20, 2025

    It would be hard to overstate the impact that Peter Thiel has had on the career of Sam Altman. After Altman sold his first startup in 2012, Thiel bankrolled Altman’s first venture fund, Hydrazine Capital. Thiel saw Altman as an inveterate optimist who stood at “the absolute epicenter, maybe not of Silicon Valley, but of a Silicon Valley zeitgeist.” As Thiel put it, “If you had to look for the one person who represented a millennial tech person, it would be Altman.”

    Each year, Altman would point Thiel toward the most promising startup at Y Combinator—Airbnb in 2012, Stripe in 2013, Zenefits in 2014—and Thiel would swallow hard and invest, even though he sometimes felt like he was being swept up in a hype cycle. Following Altman’s advice brought Thiel’s Founders Fund some immense returns.

    Thiel, meanwhile, became the loudest voice critiquing the lack of true technological progress amidst all the hype. “Forget flying cars,” he quipped during a 2012 Stanford lecture. “We’re still sitting in traffic.”

    By the time Altman took over Y Combinator in 2014, he had internalized Thiel’s critique of “tech stagnation” and channeled it to remake YC as an investor in “hard tech” moonshots like nuclear energy, supersonic planes—and artificial intelligence. Now it was Altman who was increasingly taking his cues from Thiel.

    And if it’s hard to exaggerate Thiel’s effect on Altman, it’s similarly easy to understate the influence that an AI-obsessed autodidact named Eliezer Yudkowsky had on Thiel’s early investments in AI.

    Though he has since become perhaps the world’s foremost AI doomsday prophet, Yudkowsky started out as a magnetic, techno-optimistic wunderkind who excelled at rallying investors, researchers, and eccentrics around a quest to “accelerate the singularity.”

    In this excerpt from the forthcoming book The Optimist, Keach Hagey describes how Thiel’s relationship with Yudkowsky set the stage for the generative AI revolution: How it was Yudkowsky who first inspired one of the founders of DeepMind to imagine and build a “superintelligence,” and Yudkowsky who introduced the founders of DeepMind to Thiel, one of their first investors. How Thiel’s conversations with Altman about DeepMind would help inspire the creation of OpenAI. And how Thiel, as one of Yudkowsky’s most important backers, inadvertently seeded the AI-apocalyptic subcultures that would ultimately play a role in Sam Altman’s ouster, years later, as CEO of OpenAI.

    Like Sam Altman, Peter Thiel had long been obsessed with the possibility that one day computers would become smarter than humans and unleash a self-reinforcing cycle of exponential technological progress, an old science fiction trope often referred to as “the singularity.” The term was first introduced by the mathematician and Manhattan Project adviser John von Neumann in the 1950s, and popularized by the acclaimed sci-fi author Vernor Vinge in the 1980s. Vinge’s friend Marc Stiegler, who worked on cybersecurity for the likes of Darpa while drafting futuristic novels, recalled once spending an afternoon with Vinge at a restaurant outside a sci-fi convention “swapping stories we would never write because they were both horrific and quite possible. We were too afraid some nutjob would pick one of them up and actually do it.”

    Among the many other people influenced by Vinge’s fiction was Eliezer Yudkowsky. Born into an Orthodox Jewish family in 1979 in Chicago, Yudkowsky was the son of a psychiatrist mother and a physicist father who went on to work at Bell Labs and Intel on speech recognition, and was himself a devoted sci-fi fan. Yudkowsky began reading science fiction at age 7 and writing it at age 9. At 11, he scored a 1410 on the SAT. By seventh grade, he told his parents he could no longer tolerate school. He did not attend high school. By the time he was 17, he was painfully aware that he was not like other people, posting a web page declaring that he was a “genius” but “not a Nazi.” He rejected being defined as a “male teenager,” instead preferring to classify himself as an “Algernon,” a reference to the famous Daniel Keyes short story about a lab mouse who gains enhanced intelligence. Thanks to Vinge, he had discovered the meaning of life. “The sole purpose of this page, the sole purpose of this site, the sole purpose of anything I ever do as an Algernon is to accelerate the Singularity,” he wrote.

    Around this time, Yudkowsky discovered an obscure mailing list of a society calling itself the Extropians, which was the subject of a 1994 article in Wired that happened to include their email address at the end. Founded by philosopher Max More in the 1980s, Extropianism is a form of pro-science super-optimism that seeks to fight entropy—the universal law that says things fall apart and everything tends toward chaos and death—on all fronts. In practical terms, this meant Extropians signing up to have their bodies—or at least their heads—frozen at negative 321 degrees Fahrenheit at the Alcor Life Extension Foundation in Scottsdale, Arizona, after they died, to be revived once humanity was technologically advanced enough to do so. More philosophically, fighting entropy meant abiding by five principles: Boundless Expansion, Self-Transformation, Dynamic Optimism, Intelligent Technology, and Spontaneous Order. (Dynamic Optimism, for example, involved a technique called selective focus, in which you’d concentrate on only the positive aspects of a given situation.)

    Robin Hanson, who joined the movement and became renowned for creating prediction markets, described attending multilevel Extropian parties at big houses in Palo Alto at the time. “And I was energized by them, because they were talking about all these interesting ideas. And my wife was put off because they were not very well presented, and a little weird,” he said. “We all thought of ourselves as people who were seeing where the future was going to be, and other people didn’t get it. Eventually—eventually—we’d be right, but who knows exactly when.”

    More’s cofounder of the journal Extropy, Tom Bell, aka T. O. Morrow (Bell claims that Morrow is a distinct persona and not simply a pen name), wrote about systems of “polycentric law” that could arise organically from voluntary transactions between agents free of government interference, and of “Free Oceana,” a potential Extropian settlement on a man-made floating island in international waters. (Bell ended up doing pro bono work years later for the Seasteading Institute, for which Thiel provided seed funding.) If this all sounds more than a bit libertarian, that’s because it was. The Wired article opens at one such Extropian gathering, during which an attendee shows up dressed as the “State,” wearing a vinyl bustier, miniskirt, and chain harness top and carrying a riding crop, dragging another attendee dressed up as “the Taxpayer” on a leash on all fours.

    The mailing list and broader Extropian community had only a few hundred members, but among them were a number of famous names, including Hanson; Marvin Minsky, the Turing Award–winning scientist who founded MIT’s AI lab in the late 1950s; Ray Kurzweil, the computer scientist and futurist whose books would turn “the singularity” into a household word; Nick Bostrom, the Swedish philosopher whose writing would do the same for the supposed “existential risk” posed by AI; Julian Assange, a decade before he founded WikiLeaks; and three people—Nick Szabo, Wei Dai, and Hal Finney—rumored to either be or be adjacent to the pseudonymous creator of Bitcoin, Satoshi Nakamoto.

    “It is clear from even a casual perusal of the Extropians archive (maintained by Wei Dai) that within a few months, teenage Eliezer Yudkowsky became one of this extraordinary cacophony’s preeminent voices,” wrote the journalist Jon Evans in his history of the movement. In 1996, at age 17, Yudkowsky argued that superintelligences would be a great improvement over humans, and could be here by 2020.

    Two members of the Extropian community, internet entrepreneurs Brian and Sabine Atkins—who met on an Extropian mailing list in 1998 and were married soon after—were so taken by this message that in 2000 they bankrolled a think tank for Yudkowsky, the Singularity Institute for Artificial Intelligence. At 21, Yudkowsky moved to Atlanta and began drawing a nonprofit salary of around $20,000 a year to preach his message of benevolent superintelligence. “I thought very smart things would automatically be good,” he said. Within eight months, however, he began to realize that he was wrong—way wrong. AI, he decided, could be a catastrophe.

    “I was taking someone else’s money, and I’m a person who feels a pretty deep sense of obligation towards those who help me,” Yudkowsky explained. “At some point, instead of thinking, ‘If superintelligences don’t automatically determine what is the right thing and do that thing that means there is no real right or wrong, in which case, who cares?’ I was like, ‘Well, but Brian Atkins would probably prefer not to be killed by a superintelligence.’ ” He thought Atkins might like to have a “fallback plan,” but when he sat down and tried to work one out, he realized with horror that it was impossible. “That caused me to actually engage with the underlying issues, and then I realized that I had been completely mistaken about everything.”

    The Atkinses were understanding, and the institute’s mission pivoted from making artificial intelligence to making friendly artificial intelligence. “The part where we needed to solve the friendly AI problem did put an obstacle in the path of charging right out to hire AI researchers, but also we just surely didn’t have the funding to do that,” Yudkowsky said. Instead, he devised a new intellectual framework he dubbed “rationalism.” (While on its face, rationalism is the belief that humankind has the power to use reason to come to correct answers, over time it came to describe a movement that, in the words of writer Ozy Brennan, includes “reductionism, materialism, moral non-realism, utilitarianism, anti-deathism and transhumanism.” Scott Alexander, Yudkowsky’s intellectual heir, jokes that the movement’s true distinguishing trait is the belief that “Eliezer Yudkowsky is the rightful caliph.”)

    In a 2004 paper, “Coherent Extrapolated Volition,” Yudkowsky argued that friendly AI should be developed based not just on what we think we want AI to do now, but on what would actually be in our best interests. “The engineering goal is to ask what humankind ‘wants,’ or rather what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc.,” he wrote. In the paper, he also used a memorable metaphor, originated by Bostrom, for how AI could go wrong: If your AI is programmed to produce paper clips and you’re not careful, it might end up filling the solar system with paper clips.

    In 2005, Yudkowsky attended a private dinner at a San Francisco restaurant held by the Foresight Institute, a technology think tank founded in the 1980s to push forward nanotechnology. (Many of its original members came from the L5 Society, which was dedicated to pressing for the creation of a space colony hovering just behind the moon, and successfully lobbied to keep the United States from signing the United Nations Moon Agreement of 1979 due to its provision against terraforming celestial bodies.) Thiel was in attendance, regaling fellow guests with stories about a friend who was a market bellwether, because every time he thought some potential investment was hot, it would tank soon after. Yudkowsky, having no idea who Thiel was, walked up to him after dinner. “If your friend was a reliable signal about when an asset was going to go down, they would need to be doing some sort of cognition that beat the efficient market in order for them to reliably correlate with the stock going downwards,” Yudkowsky said, essentially reminding Thiel of the efficient-market hypothesis, which posits that all publicly available information is already priced into markets, leaving no room to make money from anything besides insider information. Thiel was charmed.

    Thiel and Yudkowsky began having occasional dinners together. Yudkowsky came to regard Thiel “as something of a mentor figure,” he said. In 2005, Thiel started funding Yudkowsky’s Singularity Institute, and the following year they teamed up with Ray Kurzweil—whose book The Singularity Is Near had become a bestseller—to create the Singularity Summit at Stanford University. Over the next six years, it expanded to become a prominent forum for futurists, transhumanists, Extropians, AI researchers, and science fiction authors, including Bostrom, More, Hanson, Stanford AI professor Sebastian Thrun, XPrize founder Peter Diamandis, and Aubrey de Grey, a gerontologist who claims humans can eventually defeat aging. Skype cofounder Jaan Tallinn, who participated in the summit, was inspired by Yudkowsky to become one of the primary funders of research dedicated to reducing existential risk from AI. Another summit participant, physicist Max Tegmark, would go on to cofound the Future of Life Institute.

    Vernor Vinge himself even showed up, looking like a public school chemistry teacher with his Walter White glasses and tidy gray beard, cheerfully reminding the audience that when the singularity comes, “We’re no longer in the driver’s seat.”

    In 2010, one of the AI researchers whom Yudkowsky invited to speak at the summit was Shane Legg, a New Zealand–born mathematician, computer scientist, and ballet dancer who had been obsessed with building superintelligence ever since Yudkowsky had introduced him to the idea a decade before. Legg had been working at Intelligenesis, a New York–based startup founded by the computer scientist Ben Goertzel that was trying to develop the world’s first AI. Its best-known product was WebMind, an ambitious software project that attempted to predict stock market trends. Goertzel, who had a PhD in mathematics, had been an active poster on the Extropians mailing list for years, sparring affectionately with Yudkowsky on transhumanism and libertarianism. (He was in favor of the former but not so much the latter.) Back in 2000, Yudkowsky came to speak at Goertzel’s company (which would go bankrupt within a year). Legg points to the talk as the moment when he started to take the idea of superintelligence seriously, going beyond the caricatures in the movies. Goertzel and Legg began referring to the concept as “artificial general intelligence.”

    Legg went on to get his own PhD, writing a dissertation, “Machine Super Intelligence,” that noted the technology could become an existential threat, and then moved into a postdoctoral fellowship at University College London’s Gatsby Computational Neuroscience Unit, a lab that encompassed neuroscience, machine learning, and AI. There, he met a gaming savant from London named Demis Hassabis, the son of a Singaporean mother and Greek Cypriot father. Hassabis had once been the second-ranked chess player in the world under the age of 14. Now he was focused on building an AI inspired by the human brain. Legg and Hassabis shared a common, deeply unfashionable vision. “It was basically eye-rolling territory,” Legg told the journalist Cade Metz. “If you talked to anybody about general AI, you would be considered at best eccentric, at worst some kind of delusional, nonscientific character.” Legg thought it could be built in the academy, but Hassabis, who had already tried a startup and failed, knew better. The only way to do it was through industry. And there was one investor who would be an obvious place to start: Peter Thiel.

    Legg and Hassabis came to the 2010 Singularity Summit as presenters, yes, but really to meet Thiel, who often invited summit participants to his townhouse in San Francisco, according to Metz’s account. Hassabis spoke on the first day of the summit, which had moved to a hotel in downtown San Francisco, outlining his vision for an AI that took inspiration from the human brain. Legg followed the next day with a talk on how AI needed to be measurable to move forward. Afterward, they went for cocktails at Thiel’s Marina District home, with its views of both the Golden Gate Bridge and the Palace of Fine Arts, and were delighted to see a chessboard out on a table. They wove through the crowd and found Yudkowsky, who led them over to Thiel for an introduction. Trying to play it cool, Hassabis skipped the hard sell and began with chess, a topic he knew was dear to Thiel’s heart. The game had stood the test of time, Hassabis said, because the knight and bishop had such an interesting tension—­equal in value, but profoundly different in strengths and weaknesses. Thiel invited them to return the next day to tell him about their startup.

    In the morning, they pitched Thiel, fresh from a workout, across his dining room table. Hassabis said they were building AGI inspired by the human brain, would initially measure its progress by training it to play games, and were confident that advances in computing power would drive their breakthroughs. Thiel balked at first, but over the course of weeks agreed to invest $2.25 million, becoming the as-yet-unnamed company’s first big investor. A few months later, Hassabis, Legg, and their friend, the entrepreneur Mustafa Suleyman, officially cofounded DeepMind, a name that referenced the company’s plan to combine “deep learning,” a type of machine learning that uses layers of neural networks, with actual neuroscience. From the beginning, they told investors that their goal was to develop AGI, even though they feared it could one day threaten humanity’s very existence.

    It was through Thiel’s network that DeepMind recruited his fellow PayPal veteran Elon Musk as an investor. Thiel’s Founders Fund, which had invested in Musk’s rocket company, SpaceX, invited Hassabis to speak at a conference in 2012, and Musk was in attendance. Hassabis laid out his 10-year plan for DeepMind, touting it as a “Manhattan Project” for AI years before Altman would use the phrase. Thiel recalled one of his investors joking on the way out that the speech was impressive, but that he felt the need to shoot Hassabis to save the human race.

    The next year, Luke Nosek, a cofounder of both PayPal and Founders Fund who is friends with Musk and sits on the SpaceX board, introduced Hassabis to Musk. Musk took Hassabis on a tour of SpaceX’s headquarters in Los Angeles. When the two settled down for lunch in the company cafeteria, they had a cosmic conversation. Hassabis told Musk he was working on the most important thing in the world, a superintelligent AI. Musk responded that he, in fact, was working on the most important thing in the world: turning humans into an interplanetary species by colonizing Mars. Hassabis responded that that sounded great, so long as a rogue AI did not follow Musk to Mars and destroy humanity there too. Musk got very quiet. He had never really thought about that. He decided to keep tabs on DeepMind’s technology by investing in it.

    In December 2013, Hassabis stood on stage at a machine-learning conference at Harrah’s in Lake Tahoe and demonstrated DeepMind’s first big breakthrough: an AI agent that could learn to play and then quickly master the classic Atari video game Breakout without any instruction from humans. DeepMind had done this with a combination of deep neural networks and reinforcement learning, and the results were so stunning that Google bought the company for a reported $650 million a month later.
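    For the technically curious, the core of what DeepMind demonstrated—deep Q-learning, which pairs a convolutional neural network with reinforcement learning’s trial-and-error value updates—can be sketched compactly. The following is a minimal, hypothetical illustration in Python with PyTorch, not DeepMind’s published architecture or hyperparameters; the layer sizes and the td_update helper are assumptions chosen for brevity, and the real system also used an experience replay buffer and a separate target network.

        # Sketch of the deep Q-learning idea behind the Atari Breakout demo.
        # Assumes each observation is a stack of 4 grayscale game frames.
        import torch
        import torch.nn as nn

        class QNetwork(nn.Module):
            """Maps a stack of frames to one estimated value (Q) per joystick action."""
            def __init__(self, n_actions: int):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
                    nn.Flatten(),
                    nn.LazyLinear(256), nn.ReLU(),
                    nn.Linear(256, n_actions),
                )

            def forward(self, frames: torch.Tensor) -> torch.Tensor:
                return self.net(frames)

        def td_update(q_net, optimizer, batch, gamma=0.99):
            """One learning step: nudge Q(state, action) toward
            reward + gamma * max over next actions of Q(next_state, action)."""
            states, actions, rewards, next_states, dones = batch
            q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
            with torch.no_grad():  # targets are held fixed during this step
                best_next = q_net(next_states).max(dim=1).values
                targets = rewards + gamma * best_next * (1.0 - dones)
            loss = nn.functional.smooth_l1_loss(q_sa, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()

    The agent simply picks the action with the highest predicted Q-value (with occasional random exploration) and learns from nothing but the game’s score signal, which is what made the “without any instruction from humans” claim credible.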

    The implications of DeepMind’s achievement—which was a major step toward a general-purpose intelligence that could make sense of the chaotic world around it and work toward a goal—were not widely understood until the company published a paper on its findings in the journal Nature more than a year later. But Thiel, as a DeepMind investor, understood them well, and discussed them with Altman. In February 2014, a month after Google bought DeepMind, Altman wrote a post on his personal blog titled “AI” that declared the technology the most important tech trend that people were not paying enough attention to.

    “To be clear, AI (under the common scientific definition) likely won’t work. You can say that about any new technology, and it’s a generally correct statement. But I think most people are far too pessimistic about its chances,” he wrote, adding that “artificial general intelligence might work, and if it does, it will be the biggest development in technology ever.”

    A little more than a year later, Altman teamed up with Elon Musk to cofound OpenAI as a noncorporate counterweight to Google’s DeepMind. And with that, the race to build artificial general intelligence was on.

    This was a race that Yudkowsky had helped set off. But as it picked up speed, Yudkowsky himself was growing increasingly alarmed about what he saw as the extinction-level danger it posed. He was still influential among investors, researchers, and eccentrics, but now as a voice of extreme caution.

    Yudkowsky was not personally involved in OpenAI, but his blog, LessWrong, was widely read among the AI researchers and engineers who worked there. (While still at Stripe, OpenAI cofounder Greg Brockman had organized a weekly LessWrong reading group.) The rationalist ideas Yudkowsky espoused overlapped significantly with those of the Effective Altruism movement, which was turning much of its attention to preventing existential risk from AI.

    A few months after this race spilled into full public view with OpenAI’s release of ChatGPT in November 2022, Yudkowsky published an essay in Time magazine arguing that unless the current wave of generative AI research was halted, “literally everyone on Earth will die.”

    Thiel felt that Yudkowsky had become “extremely black-pilled and Luddite.” And two of OpenAI’s board members had ties to Effective Altruism. Less than a week before Altman was briefly ousted as CEO in the fall of 2023, Thiel warned his friend, “You don’t understand how Eliezer has programmed half the people in your company to believe this stuff.” Thiel’s warning came with some guilt that he had created the many-headed monster that was now coming for his friend.

