Op-ed

Are our politicians AI skeptics?

Peder Skjelbred
Aksel Braanen Sterri
First published in:
Aftenposten

Either they ignore the challenges, or they ridicule them.


AI-generated illustration from Midjourney


Norwegian politicians sound like climate skeptics when they talk about artificial intelligence (AI). When will politicians wake up to the reality of AI development? The AI policy of the 2020s will be seen as the climate policy of the 1970s.

Last week, OpenAI released the latest version of ChatGPT. AI researcher Morten Goodwin tells Aftenposten that he is stunned by the progress it has made since GPT-4 arrived in 2023.

However, the improvement is entirely in line with the pace of development we have seen in recent years. What is truly stunning is the reaction of the politicians. Norwegian politicians sound like climate skeptics when they talk about AI. Either they ignore the challenges, or they ridicule them.

We must learn from our climate mistakes if we are to have any hope of steering AI in the right direction.

Experts have long warned that we need to get a grip on technological development. One of those who has spoken in the clearest terms is Stuart Russell. Russell is a professor of computer science at UC Berkeley and has literally written the textbook on AI. A few weeks ago, he visited Oslo and warned Prime Minister Jonas Gahr Støre (Ap) and a packed university hall about a development that is racing at full speed in a dangerous direction.

“We know how to make more and more capable systems, but we have no science for controllable AI,” Russell points out.

We do not understand, in other words, how to ensure that these systems act in line with our values. Yet companies are spending billions of dollars to reach the goal of creating increasingly intelligent and autonomous systems as quickly as possible.

Russell and a number of the world's leading AI scientists worry that it could end in disaster if the companies succeed.

When digitalisation minister Karianne Tung (Ap) was confronted with Russell's warnings in an interview, she said she usually asks those concerned about AI “if they're afraid of their robot vacuum cleaner, because it has AI in it”.

This must be Norwegian politics' “snowball moment,” named after Republican senator James Inhofe, who brought a snowball into the Senate to refute the theory of global warming.

The scientific consensus is indeed stronger in the climate field than on AI. A number of AI experts believe the major threats posed by AI are exaggerated. Yet it seems that many of those closest to the research front are the clearest in their warnings.

Nor should disagreement among scientists be handled by trusting those with the most pleasant predictions. The UN shows how it can be done: in an effort to systematize the knowledge surrounding AI development, the United Nations recently proposed setting up an IPCC for AI.

For AI, as for climate, it is important to act early. For a long time, climate scientists' warnings about the consequences of CO2 emissions for global warming fell on deaf ears. Now we are paying the price for our parents' and grandparents' generations not listening.

We should learn from our mistakes. Once a technological paradigm takes hold, it takes a lot to change it. If we create superintelligent systems, we may never regain control. Early regulation, on the other hand, can steer technological development in the right direction, so that we get systems that help people, not harm them.

Some are deterred from action because of the great uncertainty associated with technological developments. But uncertainty about where development is going is a poor argument for sticking your head in the sand.

It is not the most likely climate scenarios that give the greatest cause for concern. It is the low but non-negligible probability scenarios that will really hurt us if they materialize.

The same goes for AI. If there is even a small danger that AI development will go disastrously wrong, that demands more of us in preventing it. Uncertainty is therefore not an argument for paralysis, but for acting in a way that is robust across multiple scenarios. While it is unlikely that Russia will invade Norway in the next ten years, we have a plan for it and work actively to prevent it, both militarily and diplomatically.

However, you don't have to be worried about the Russell scenario of runaway AI to think that the government should take AI development more seriously.

In 2023, thousands of AI researchers were asked in what year it is “as likely as not” that we will have AI systems that can do any conceivable job better and cheaper than humans. Their answer: 2047. That is less than 25 years away, roughly when we are projected to reach 2 degrees of warming under current climate policy.

And while considerable political resources go into preparing for the climate of the future, the technology of the future is, so to speak, entirely absent from the government's focus.

Perhaps the explanation for the Norwegian politicians' nonchalance is that the EU has solved all AI-related problems with its new AI regulation? That, however, is naive to believe. Although the AI Act is an important step forward, the legislation has several gaps.

The regulation does not cover AI of military or national security interest, AI models specialized in biology (which many of the experts are most concerned about), or non-commercial models. Nor is it adapted to the systems the AI companies are currently working on, namely AI systems that can operate on their own.

Russell does not believe that Norway can do much to influence development. However, we do not believe that Norway is so irrelevant.

We need to start participating in the international arenas where the broad outlines of future AI policy are discussed. It is a small step in the right direction that Norway has joined the European Artificial Intelligence Board, which coordinates the implementation of the AI Act. But we must step up and work to join the international summits, like those held in the UK and South Korea over the past year without Norwegian participation.

In addition, Norway owns a significant share of the companies behind the development, through the Oil Fund. That gives us an opportunity to exercise active ownership and push for safe practices in AI development.

We can also take the initiative to ensure that the EU's new ambition of a CERN for AI becomes a center for controllable AI, and not merely a contribution to further accelerating AI development.

Without clearer action, the AI policy of the 2020s will be seen as the climate policy of the 1970s. If we are to have any hope of controlling the future, we need to take action now.
