Op-ed

Dumbing down the debate on AI threats

Aksel Braanen Sterri
Eirik Mofoss
First published in:
Dagens Næringsliv

We need more humility from leading voices.


AI-generated illustration from Midjourney


Could we have super-intelligent AI systems by 2030? Yes, according to OpenAI CEO Sam Altman. In an op-ed in DN on November 4, Inga Strümke and Anders Løland attempt to dismiss the claim as dangerous hype.

Their argument is that leaders in the AI industry deliberately inflate the possibility that we will get super-intelligent AI, so-called AGI, which could either save or destroy humanity. In this way, they distract us from more important and pressing problems today, such as poverty, increased surveillance, and manipulation.

That Altman has an incentive to hype his product is easy to understand. But we should not commit the fallacy of concluding that because Altman profits from saying something, what he says must be wrong.

Even if one were to distrust Altman, it is hard to explain why leading researchers such as Stuart Russell and Geoffrey Hinton also warn of existential threats from AGI. Hinton, who recently won the Nobel Prize in Physics, even quit his lucrative position at Google to be free to warn about these threats.

Strümke and Løland are also guilty of setting up a false dichotomy between addressing current and future problems. Ignoring AGI risks because we have pressing problems now is like dropping investments in pandemic preparedness because people need hip surgeries.

Spending scarce resources on combating hazards that may never occur, but that have catastrophic consequences if they do, is not popular. But investing in preventing problems is cheaper than repairing the damage afterwards.

Strümke and Løland do, however, have a more substantive critique of the idea of superintelligence. They argue that we don't even have a clear definition of intelligence. Without one, we don't know whether "intelligence is a quantity that can be maximized," much less whether we will be able to create super-intelligent systems.

But this philosophical objection misses the mark. AI is already super-intelligent in narrow domains, such as chess and Go, in the sense that it consistently beats the world's best players. The large language models even show sparks of general intelligence, in the sense that they deliver competent results across a whole range of domains. And even though we don't know whether intelligence "can be maximized," we do know that GPT-4 is much smarter than GPT-2.

We don't know whether the current AI paradigm will give us super-intelligent systems. But given the enormous progress of recent years, and the clear expectation among AI experts that we will create AGI before 2050, it lacks academic grounding to dismiss the idea as meaningless.

When AI models become good enough to solve complex tasks over time, we risk facing enormous societal changes. Which jobs will be lost? How will it affect the economy? How do we ensure that such systems remain safe and controllable? These are not sci-fi questions; they are entirely real challenges that require planning now.

We need more humility from leading voices in the AI debate, rather than categorical dismissals founded on weak reasoning.
