Background

We are facing new challenges and extreme risks. Our approach to security must adapt.

Rapid technological change and the reemergence of great power competition are affecting geopolitics and creating novel risks. These developments increase the speed, complexity, and number of actors in 21st century conflicts—undermining stability and increasing the dangers of conflict escalation. Consequently, long-term national security is becoming increasingly international in scope and concerned with governing powerful emerging technologies. As Richard Danzig puts it, “[AI and synthetic biology] are developing faster than our mechanisms of control, and they have the potential for producing especially traumatic, one could say catastrophic, consequences.”

Today’s choices will significantly influence the course of the twenty-first century.
— RAND Corporation

Some of today’s choices will have enormous, enduring impacts, while others may do significant harm—even inadvertently. Today we are witnessing the costs of burning fossil fuels on the health of our climate. Going forward, we need to mitigate the harm current strategies impose on our long-term wellbeing.

However, our national security institutions are not prepared to navigate the far-reaching consequences of today’s decisions.

The defense establishment overwhelmingly favors the present over the needs of the future.
— Christian Brose, "The Kill Chain"

Experience with previous technological revolutions—like the development of aviation and nuclear weapons—should inform discussions about current efforts to control artificial intelligence (AI), synthetic biology, and other emerging technologies.

However, these challenges are larger and more complicated than those we have faced before.

The premises that have guided us from World War II to the present must be modified for the future.
— Johns Hopkins Applied Physics Laboratory

We are no longer in a world in which we can reliably prevail with overwhelming force or resources. In the words of Richard Danzig, “Technological superiority is not synonymous with security.”

These new, powerful forces will disrupt global stability. Advanced AI and biotechnology will radically alter today’s security environment. So, too, will developments like great power conflict, pandemics, nuclear proliferation, and climate change.

National security leaders are starting to recognize the need for change, but our institutions are slow to catch up.

Low-probability, high-impact events are difficult to forecast and expensive to prepare for but [our efforts] can provide some resilience to exogenous shocks.
— U.S. National Intelligence Council, "Global Trends: 2040"

So often, federal funding—and, therefore, priorities—are better suited to fighting the last war than the next.

Many factors contribute to this institutional lag: psychological biases favoring the near term, misaligned incentives within national security organizations, and insufficient practices for predicting and planning for the next crisis. As a House Armed Services Committee member put it, “the past is over-represented in Washington” because “the future has no lobbyists.”

Despite the lack of investment, one historical lesson is clear:

The prevention of the supreme catastrophe ought to be the paramount object of all endeavor.
— Winston Churchill

Threats

NET members focus on some of the biggest issues of our time.

Catastrophic Biological Risks

COVID-19 has caused global devastation and demonstrated the urgent need to improve U.S. national strategy and responses to biological threats. Moreover, continued progress in and proliferation of biotechnology are likely to increase both the frequency and severity of future biological threats, whether from lab accidents or the deliberate release of engineered pathogens. As Kevin Esvelt points out, the United States “has lost more citizens to the pandemic than it has in all military conflicts in the past century, yet it devotes less than 1% of its defense budget to biodefense.”

We should invest in robust early warning and rapid response capacities, countermeasure development, and improved international coordination. Collectively, these measures would allow the nation to stop future pandemics in their tracks—avoiding a staggering number of lives lost and damage to health, national readiness, and the economy.

Destabilizing Artificial Intelligence

“Artificial Intelligence (AI) technologies promise to be…a source of enormous power for countries that harness them.” However, as powerful as modern machine learning is, the technology remains profoundly fragile: even the most advanced AI systems can fail unpredictably. The risk of system failures causing significant harm grows as machine learning becomes more widely used, especially in areas where safety and security are critical.

To reap the benefits of AI and mitigate risks of catastrophic failure, the nation needs to invest in research aimed at ensuring the safety and reliability of AI and its resilience against attacks from malicious actors. In addition, as Stephen Hawking put it, “Whereas [AI’s] short-term impact…depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Modern Nuclear War

In the post-Cold War world, awareness of nuclear war has faded into the background—but the threat never went away. Indeed, the risk of nuclear proliferation and use is growing in light of the reemergence of great power conflict, shifting nuclear postures, and the development of new technologies. Collectively, these changes are poised to undermine the stability of nuclear deterrence and increase escalation risks. A nuclear war could not only kill hundreds of millions of people directly but also billions indirectly through its uncertain effects on agriculture. At its worst, an exchange of hundreds of nuclear warheads could imperil humanity’s existence.

The United States once again needs to make nuclear security a policy priority, in dialogue with both its allies and its geopolitical competitors. Together, we must develop new approaches, both technological and political, to minimize the risk of nuclear use—whether deliberate or the result of accident or miscalculation.

Great Power Competition

Three decades after the Cold War, policy analysts proclaim a return to great power competition. The 2022 National Defense Strategy characterizes China as the United States’ “most consequential strategic competitor” and Russia as an “acute threat.” While those nations threaten our values and security, efforts to defend against them can create dangers of their own—making arms races and escalation more likely and thwarting international cooperation on collective security issues. As such, great power competition not only raises the risk of direct catastrophe but also indirectly exacerbates other catastrophic threats.

The national security community should identify areas of mutual concern with adversaries, along with the international systems, norms, and practices needed for security in those domains. In setting foreign policy, policymakers should strive to maintain cooperation on those issues even as they compete in other areas.

Our Resources page provides further reading about each of these threats.