If you want to talk at length about the potential for scientific and medical catastrophe, Jason Matheny is a surprisingly pleasant person to speak with.
Now president and CEO of the Rand Corporation, Matheny has built a career out of thinking about such grim scenarios. An academic by training with a focus on public health, he delved into the worlds of pharmaceutical development and cultivated meat before turning his attention to national security.
As director of the Intelligence Advanced Research Projects Activity, the research arm of the US intelligence community, he pushed for more attention to the risks of biological weapons and poorly designed artificial intelligence. In 2021, Matheny was tapped to advise President Biden on technology and national security issues. And then, in July of last year, he took over as president and CEO of Rand, the oldest nonprofit think tank in the US, which has shaped US policy on nuclear strategy, the Vietnam War, and the development of the internet.
Matheny discusses threats like AI-enabled bioterrorism in compelling but measured terms, a Mr. Doomsday in a casual suit. He is steering Rand to research the risks to US democracy, develop new climate and energy strategies, and find ways to conduct “competition without catastrophe” with China. But his long-standing worries about biological weapons and AI still loom large.
Onstage with WIRED at the recent Verify security conference in Sausalito, California, hosted by the Aspen Institute and the Hewlett Foundation, he warned that AI is making it easier to learn how to build biological weapons and other potentially devastating tools. (Perhaps for that reason, he joked that he would pick up the bar tab the following day.) The conversation has been edited for length and clarity.
Lauren Goode: To start, let’s talk about your role at Rand and what you’re envisioning for its future. Rand has a significant place in American history. We have it to thank, in part, for the development of the internet.
Jason Matheny: We’re still working out the bugs.
Right, we’ll make the necessary adjustments now. Rand has also influenced nuclear strategy, the Vietnam War, the space race. What do you think will define your tenure at Rand?
There are three areas where I really want to support growth. First, we need a framework for thinking about what [technological] competition looks like without a race to the bottom on safety and security. How do we, for instance, make sure that competition with China isn’t catastrophic? A second area is figuring out a climate and energy strategy for the country that is matched to our technology needs, the infrastructure we have in place, and the economics that go with it.
And then a third area is understanding the threats to democracy right now, not just in the United States but globally. We’re seeing a shift in the way that standards of fact and evidence are applied in policy debates. We have a group of researchers at Rand who are deeply troubled to be witnessing this erosion of norms, which is happening alongside a rise in various forms of autocracy.
Before I moved into security work, I was working in public health on infectious disease control for tuberculosis and malaria. Then in 2002 the first virus was synthesized from scratch, on a Darpa project, and it was a watershed moment for the biosciences and for the public health community, which realized that biology was going to become an engineering discipline that could be misused. I was working with veterans of the smallpox eradication campaign, and their reaction was, “Crap, we just spent decades eradicating a disease that could now be synthesized from scratch.”
So I moved into security, trying to figure out how to improve the security around biolabs so that they’re less likely to be misused. How can we detect biological weapons programs? Unfortunately, there are still several of them in a few places around the world. How can we build more security into society so that we’re more resilient to both natural and engineered pandemics?
There’s a lot of risk that remains in the world. Covid demonstrated this. By historical standards, Covid had a fairly low fatality rate, less than 1 percent, whereas there are natural viruses with fatality rates well above 50 percent. And there are engineered pathogens that could come close to 100 percent lethality while being as transmissible as SARS-CoV-2. Even though we’ve gotten extremely good at designing vaccines and manufacturing them quickly, getting them tested and approved takes about as long as it did 20 years ago. So the amount of time it would take to vaccinate a population today is roughly what it was for our parents and even our grandparents.
When I first started getting involved in biosecurity in 2002, it cost many millions of dollars to synthesize a poliovirus, which is a very, very small virus. A pox virus, a very large virus, would have cost close to $1 billion to synthesize. Today the cost is less than $100,000, so it’s a 10,000-fold decrease over that period. Meanwhile, vaccines have roughly tripled in price. The offense-defense asymmetry is moving in the wrong direction.
And where do you think the greatest biorisk comes from? Who is our most likely attacker?
One is nature. Natural viruses continue to evolve. We’re going to see more natural pandemics. Some of them are going to be worse than Covid, some of them are going to be not as bad as Covid, but we’ve got to be resilient to both. Covid alone cost the US economy more than $10 trillion, yet the federal investment in preventing the next pandemic is estimated at between $2 billion and $3 billion.
Another category is deliberate biological attacks. Aum Shinrikyo was a doomsday cult in Japan that had a biological weapons program. They believed they would be fulfilling a prophecy by killing off most of the world’s population. Fortunately, they were working with 1990s biology, which was much less capable. They eventually turned to chemical weapons instead and carried out the Tokyo sarin gas attacks.
Today there are individuals and groups with a similar sense of purpose who are keenly interested in using biology as a weapon. What’s preventing them from using biology effectively isn’t controls on the tools or the raw materials, because those are now available in many labs and on eBay—you can buy a DNA synthesizer for much less than $100,000 now. Most lab supply stores carry all the materials and equipment you need.
What an apocalyptic group like that lacks is the know-how to turn those tools into biological weapons. The worry is that AI makes that know-how more widely accessible. Some of the research done by [AI safety and research company] Anthropic has looked at risk assessments of whether somebody without a strong bio background could use these tools—could they essentially get graduate-level training from a large language model acting as a tutor? Right now, maybe not. But if you look at the progress over the past few years, the barrier to entry for someone who wants to carry out a biological attack is dropping.
So we should tell everyone that tomorrow there will be an open bar.
Happy hour. We’ll pick up the tab.
Everyone right now is talking about AI and the possibility of a superintelligence overtaking humans.
That’s going to require a stronger drink.
You’re an effective altruist, aren’t you?
According to the media, I am.
Do you think that’s how you would characterize yourself?
I don’t think I’ve ever actually been an effective altruist. And my wife, when she read that, said, “You are neither effective nor altruistic.” But it is certainly true that there are effective altruists at Rand who are concerned about AI safety. And that’s a community that, coming in part out of computer science, has been worried about AI safety for longer than most.
So you’re saying you’re not an effective altruist, but you are someone who, like some effective altruists, has been cautious about AI for a long time. What was it, years back, that made you realize we needed to be careful about introducing AI into the world?
For me it was realizing how much of what protects us from the misuse of biology is that the relevant knowledge is hard to acquire. [AI] that makes very specialized knowledge easier to acquire, without guardrails, is not an unalloyed good. That applies to nuclear knowledge. It applies to biological weapons knowledge. It applies to cyberweapons knowledge. So we have to find a way to balance the benefits and the risks of tools that can produce very specialized knowledge, including knowledge about weapons systems.
Even before 2016, it was clear this was coming. James Clapper [former US director of national intelligence] was worried about this, and so was President Obama. [In an October 2016 WIRED interview, Obama cautioned that AI could enable new kinds of cyberattacks and said he spent “a lot of time worrying” about pandemics. —Editor] I think he was worried about what happens when you can do software engineering much, much faster, focused on generating malware at scale. It would basically be like automating a workforce of a million people who are constantly coding new malware without ever falling asleep.
It will also increase our security, because it will allow us to multiply our defensive improvements at the same scale. So one of the big questions is whether cyber offense or cyber defense gains the bigger advantage as this stuff scales. What will that look like over the long run? I don’t know the answer to that question.
Do you think it’s at all possible that we’ll enter some kind of AI winter or slowdown at some point? Or is this just hockey-stick growth, as the tech industry likes to say?
It’s hard to see it slowing down right now. If anything, there seems to be a positive feedback loop: the more investment you put in, the more investment you’re able to put in, because you’ve scaled up.
So I don’t think we’ll see an AI winter, but I don’t know. Rand has done some humbling forecasting experiments in the past. There was a project we did in the 1950s to forecast what the year 2000 would be like, and there were lots of predictions of flying cars and jet packs, whereas we didn’t get the personal computer right at all. So forecasting out too far is probably no better than a coin flip.
How worried are you about the use of AI in military applications, like drones?
There are a lot of reasons why countries are going to want autonomous weapons. Ukraine has become a petri dish for them, in part because of the radio jamming being used there—it’s very tempting to have autonomous weapons that don’t need to call home.
But I think cyber [warfare] is the realm where autonomy has the highest benefit-cost ratio, both because of its speed and because it can penetrate deep into places that can’t communicate back.
But what about the moral and ethical implications of autonomous drones that have high error rates?
I think the empirical analysis of error rates has been mixed. [Some analyses] have found that autonomous weapons probably have lower error rates and probably result in fewer civilian casualties, in part because [human] combatants sometimes make bad decisions under stress and at risk of harm. In some circumstances, using autonomous weapons could lead to fewer civilian fatalities.
But this is an area where it’s really hard to predict what the future of autonomous weapons will look like. Some countries want to ban them entirely. Others are saying, “Well, let’s wait and see what they look like and how accurate they are before making decisions.”
I think one of the other questions is whether autonomous weapons advantage countries that have a strong rule of law over those that don’t. One reason to be very wary of autonomous weapons is that they mostly require capital rather than people: if you have access to a supply chain and plenty of money to burn, you don’t need much human capital. That makes them appealing to wealthy autocracies, more so than to democracies that invest heavily in human capital. It’s possible that autonomous weapons will advantage autocracies more than democracies.
You’ve said that Rand will increase its investment in China analysis, particularly in areas where its industrial policy, domestic politics, and economy are poorly understood. Why the increased investment?
[ The US-China relationship ] is one of the most important competitions in the world and also an important area of cooperation. In order for this century to go well, we must get both things right.
Not since the War of 1812 has the US faced a strategic rival whose GDP is more than two-thirds of our own. So [we need] an accurate assessment of net strengths and net weaknesses across the various areas of competition, whether economic, industrial, or military, or in human capital, education, and talent.
And what are the areas where the US and China can cooperate for mutual benefit? Nonproliferation, climate, certain kinds of investment, pandemic preparedness. I think getting that right really matters for the two largest economies in the world.
I recently had the opportunity to speak with Nvidia CEO Jensen Huang about US export controls. Under the regulations passed in 2022, Nvidia is prohibited from exporting its most powerful GPUs to China. How effective is that strategy in the long term?
Even if the US succeeds in preventing the shipment of advanced chips like [Nvidia] H100s to China, can China obtain those chips through other means? Can China produce its own chips that, while not as advanced, might still be good enough for the kinds of capabilities we’re concerned about?
If you’re a national security decisionmaker [in China] and you’re told, “Hey, we really need this data center to build the arsenal of offensive tools we need, but it’ll cost four times as much because the energy bill will be bigger, and it’ll be slower because it won’t be as cost-effective as using H100s,” you’re probably still going to pay the bill. So at what point does a decisionmaker stop being willing to pay? Is it 10X the cost? Is it 20X? We don’t know the answer to that question.
But those export controls do put certain kinds of operations out of reach. Because [the chip technology] in China is kind of stuck while the rest of the world continues to advance, the gap between what you can get a Huawei chip to do and what you can get an Nvidia chip to do keeps growing. And that denies a level of computing efficiency that could be useful for a variety of military operations. I think New York Times reporter Paul Mozur was the first to break the news that Nvidia chips were powering the data center in Xinjiang that’s being used to surveil the Uighur prison camps in real time.
That raises a really important question: Should those chips be going into a data center that is being used for human rights abuses? Whatever one’s opinion of the policy, doing that math carefully really matters, and that’s what we focus on at Rand.