Existential threats

An existential threat is a possible event that could cause enormous, permanent devastation on a global scale.

Man-made threats include extreme climate change, nuclear war, synthetic viruses and advanced artificial intelligence, among others. Asteroid impacts, natural pandemics and supervolcano eruptions, on the other hand, are natural threats. The risk from man-made threats is estimated to be clearly the highest.

Fortunately, we can both reduce the likelihood of existential events occurring and strengthen our resilience in case the worst should happen. Measures to mitigate threats include strengthening international agreements on nuclear disarmament, biotechnological development and artificial intelligence. A concrete example of a measure that increases resilience is the global seed vault in Svalbard.

Relative to the scale of the risks we face, the measures in place against existential threats today are both too few and too weak. This applies to long-standing threats, such as climate change and nuclear weapons, even though these are relatively high on the political agenda. It applies even more to newer threats from developments in biotechnology and artificial intelligence.

Biological threats and artificial intelligence (AI) are both very serious and neglected in today's politics. This combination makes it especially important to pay more attention to these threats.

Pandemics

The COVID-19 pandemic has already cost millions of people their lives and inflicted trillions in costs on society. SARS-CoV-2 is nevertheless by no means the worst possible pandemic virus. Technological advances in biotechnology in recent decades have made it both easier and cheaper to produce extremely dangerous viruses. This, combined with the fact that advanced biotechnology has become far more accessible, has increased the risk of an existential biological catastrophe.

Fortunately, we can take action both to prevent new pandemics and to prepare for them. We can do this by, among other things, improving regulation of the purchase of synthetic DNA, strengthening surveillance of viruses, and ensuring mechanisms for rapid vaccine development.

Artificial Intelligence

Advances in artificial intelligence, exemplified by ChatGPT and GPT-4, have seriously captured people's attention over the past year. Increasingly advanced artificial intelligence can help solve major societal problems and improve our lives.

At the same time, there is reason to worry. We have no robust way to ensure that advanced AI systems act in line with our wishes, or that they are not used by malicious actors. These problems can be exacerbated by racing dynamics between companies or countries.

Reducing risks from advanced artificial intelligence should be much higher on the political agenda than it is today. We need increased investment in safety research, stricter international and national regulation of companies developing artificial intelligence, and support for communities exploring less risky forms of artificial intelligence.