Op-ed

AGI is not futurism

Jacob Wulff Wold
Aksel Braanen Sterri
First published in:
Klassekampen

The fact that the problems belong to the future does not mean that we should postpone the discussion of artificial general intelligence until we are in the middle of them.


AI-generated illustration from Midjourney


Artificial general intelligence (AGI), that is, AI that can perform all work tasks at least as well as humans, is on the way. We are currently completely unprepared for what is coming, and our AI experts are making contradictory claims.

“We are becoming increasingly confident that [AGI] is coming,” AI researcher Morten Goodwin tells Aftenposten, indicating that it could arrive within the next ten years. AGI can “change humanity forever,” claims Inga Strümke. Yet she also calls AGI a futuristic topic that belongs “mostly in Hollywood,” and says it should not be the subject of serious societal analysis, as she put it on NRK Urix.

These three claims do not hang together. If AGI can arrive within the decade and can change humanity forever, we cannot leave the challenge to Hollywood. We need a professionally grounded conversation and political action.

We already have a professional foundation to stand on. World-class AI researchers such as Geoffrey Hinton, Yoshua Bengio and Stuart Russell have for several years written books, research articles and op-eds on political and technical steps we can take to ensure a safe future with AGI.

The fact that something belongs to the future, or that the future is uncertain, does not mean that we should postpone the discussion until we are in the middle of the problem. Today we are lagging behind on the climate problem because we did not take scientists' future scenarios seriously.

Now the UN Climate Panel and international agreements are standard references when climate policy is developed. In the same way, we need a UN panel for AI, AI scenarios and international AI safety targets.

What makes AGI so groundbreaking? One explanation is that AI is a substitute for human labor. Since AI systems are computer programs, they can be copied and run with minimal resource use compared to humans.

Since AGI by definition can perform all tasks as well as humans, it will be able to provide a virtually unlimited workforce. Set that workforce to work on problems like climate change, disease or poverty, and it is easy to envision AGI as the solution to all our problems.

“We lack climate solutions because we didn't take scientists' future scenarios seriously”

AGI also gives power to whoever controls the technology. Just as AGI can bring us new medicines and green technologies, it can also create new biological viruses, computer viruses and weapons of mass destruction.

AGI can also concentrate power. Today, even dictatorships rely on the tacit consent of the people. But AGI gives elites a power base other than the people's labor and ability to bear arms. If states become less dependent on their voters, even democratic societies can move in a totalitarian direction.

Another concern is that we are creating systems we do not understand. Researchers and developers have no robust methods for ensuring that AI behaves as intended after the training phase. Many researchers fear that we could lose control of an AGI developed with current methods. We risk starting a process we cannot stop, and losing control of the future.

Today, our most powerful technology is being developed by a handful of profit-maximizing companies without significant regulation. We can do better than that.

First, AI safety needs to be far higher on the agenda. In Norway, the government boasts of having allocated one billion for AI research over five years. The UK has already spent more than a billion on AI safety alone. Their newly established AI Safety Institute will also be able to draw on an investment of 20 billion in publicly owned computing power.

Like climate change, AGI is an international challenge that requires international solutions. But whereas climate change is difficult to solve because blame and responsibility are spread across the entire world's population, the development of powerful AI is concentrated in a handful of powerful actors.

Regulation of computing power can ensure that trusted actors have access to enough of the specialized computer chips required to develop powerful AI, and that untrusted actors do not. Norway and the Nordic countries can be a driving force for such initiatives.

Regulation of AI development should build on international research collaboration on AI safety. One proposal is to create a “CERN” for safe AI, where research is conducted on how to build safer systems, so that we can prepare for different scenarios.

Such research projects can also develop publicly owned AI models, so that commercial companies have to compete against alternatives built with societal benefit as the goal.

If it proves too risky to let the tech companies create AGI, we can throttle their access to computing power, so that the next generation of foundation models must be developed through a joint international collaboration that prioritizes safety and societal benefit over speed and profit.

AI policy is still in a formative phase, but Norway is standing on the sidelines. It is high time we started paying attention to developments beyond our borders.
