I’m writing this while on a flight from Washington, DC, to the Bay Area, which is rather appropriate, for I have recently realized that there’s yet one more reason to fear a second Trump presidency: artificial intelligence.
One does not have to be a techno-doomer to be concerned that AI will have a dramatic impact on our world and, yes, possibly become a threat. It’s another source of power—and one that could conceivably become a power of its own. The crucial question is, who controls AI and its use? Can it be safely developed and implemented? It seems there ought to be rules. But who will write them and who will enforce them?
The recent reports from Silicon Valley have not been encouraging. This month, a group of past and current employees at OpenAI, one of the leading outfits in this field and the creator of ChatGPT, issued a warning: The firm is putting profits ahead of safety and rushing the development of products that may be dangerous, as it seeks to build artificial general intelligence, a.k.a. AGI—a program that can do anything a person can. You might recall OpenAI was in the news a few months ago when its board surprisingly dumped CEO Sam Altman amid concerns about his stewardship of the company, which originally was founded to lead the way in the responsible and prudent pursuit of AI technology. It apparently veered off that course. Yet Altman ultimately was reinstated, a signal the company would move full speed ahead. (OpenAI also generated headlines when actress Scarlett Johansson accused it of stealing her voice for one of its AI products.)
William Saunders, a research engineer who left OpenAI in February and joined in that warning, told the New York Times, “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward.’” Daniel Kokotajlo, a former OpenAI researcher who helped organize these critics, told the newspaper that the company is “recklessly racing” ahead, that he believes there’s a 50 percent chance AGI will arrive by 2027, and that there’s a 70 percent probability advanced AI will annihilate or catastrophically affect humanity.
Other experts aren’t as gloomy as Kokotajlo. But…yikes.
There have been plenty of reports in the past year or two of Big Tech companies, including Google and Microsoft, too quickly releasing AI products and not heeding the cautions of safety-minded personnel. No surprise, since AI is the new gold rush, and these firms don’t want to be left behind. The Silicon Valley ethos has tended to be get there first and worry about problems later. That thinking regarding AI is exponentially more dangerous than, say, with a ride-share app.
The introduction of AI into our world has accelerated. In recent weeks, Facebook, Apple, and Google have injected it into their products. OpenAI introduced a new and more powerful version of its chatbot and forged partnerships with a number of media organizations. As the Atlantic’s Charlie Warzel put it,
Technology companies…are racing to capture money and market share before their competitors do and making unforced errors as a result. But though tech corporations may have built the hype train, others are happy to ride it. Leaders in all industries, terrified of missing out on the next big thing, are signing checks and inking deals, perhaps not knowing what precisely it is they’re getting into or if they are unwittingly helping the companies who will ultimately destroy them. The Washington Post’s chief technology officer, Vineet Khosla, has reportedly told staff that the company intends to “have A.I. everywhere” inside the newsroom, even if its value to journalism remains, in my eyes, unproven and ornamental. We are watching as the plane is haphazardly assembled in midair.
And there’s no FAA looking over this. The people driving the hype train or piloting the being-assembled-in-midair plane are the Big Tech billionaires and their minions. Clearly, they do not have the public interest—or public safety—foremost in mind.
That’s why it was unnerving to see Donald Trump trek to San Francisco earlier this month to bag $12 million at a fundraiser for tech execs organized by venture capitalists David Sacks and Chamath Palihapitiya. Palihapitiya has previously raised money for both Democrats and Republicans (including Ted Cruz and Vivek Ramaswamy). In recent years, he has supported immigration reform and the expansion of low-income housing and has expressed his regret for helping Facebook become a behemoth, noting, “The short-term, dopamine-driven feedback loops that we have created are destroying how society works: no civil discourse, no collaboration, misinformation, mistruth and it’s not an American problem.” Yet he now finds Trump, the candidate of disinformation and divisive discourse, acceptable?
Sacks is a pal of billionaire Peter Thiel, and though he donated $70,000 to Hillary Clinton’s 2016 campaign, he has since become a prominent booster of Republicans, including J.D. Vance and Ron DeSantis. He’s been a loud opponent of US military assistance to Ukraine, claiming it will lead to “Woke War III,” and he has actively amplified right-wing conspiracy theories on social media.
Another attendee at the event was Eoghan McCabe, the chief executive of Intercom, a messaging company. In 2017, he and his firm decried Trump’s proposed Muslim ban. Under the headline “Supporting our Muslim sisters and brothers in tech,” he wrote, “We feel compelled as humans to see if we can try to ease the new suffering of some, by even a small amount.” His company offered to pay the legal fees for Muslim tech workers who wanted to relocate to Dublin, Ireland. The point was to send a “message” to the Trump administration. Six years later, he got into hot water within his own firm when he halted its support for Pride celebrations. He now crows that he and the other tech execs are backing Trump “for his policies on war, immigration, crypto, and more,” calling this election “a referendum on those issues.” That’s quite a journey.
In general, the Silicon Valley crowd does not want pesky government oversight. Especially on cryptocurrency. And they want AI decision-making to be in their own hands. Let the disruptors and innovators rule! And how much interest do you think Trump has in regulating AI?
There’s a question whether Congress and the federal government are up to the task of protecting us from the possible dangers of AI. The technology is expanding at an intense pace—a speed far beyond what our government is usually capable of handling. We’ve seen members of Congress during hearings with tech executives display plenty of ignorance. There are, however, tech-ish legislators on both sides of the aisle who have been gathering and pondering what to do about AI, and President Joe Biden in October signed an extensive 111-page executive order on AI that sought to establish standards and guidelines for its development. But none of this is keeping up with the challenge, which could be an existential one.
Obviously, Trump would be no improvement. He would be worse. Look at his approach to climate change and energy. In April, he met with energy company executives and lobbyists at his Mar-a-Lago club and tried to cut a deal. They should donate $1 billion to his campaign, he said, because he would cut environmental safeguards that govern their industry. This was cynical transactionalism. Screw the world, gimme money.
Trump’s approach to AI is unlikely to be any different. A bunch of tech billionaires just handed him a bundle of campaign loot. For Trump, that’s love. And more may be coming. He’s not going to take them on. Moreover, a guy who doesn’t understand how magnets or electric boats work is not able to sort through the tough issues of artificial intelligence. Trump would be a dream president for the tech robber barons who desire free rein to do whatever they want with AI.
On a podcast after his fundraising trip to San Francisco, Trump was asked about AI. He gave a rambling answer in which he dwelled on the fact that he had pocketed $12 million from the tech executives and marveled at the accuracy of deepfakes. His bottom line on AI: “As long as it’s there, let’s see how it works out.” How reassuring.