Op-ed

Don't stick your head in the sand

Jacob Wulff Wold
Aksel Braanen Sterri

Jacob Wulff Wold and Aksel Braanen Sterri respond to criticism from Kjetil Rommetveit and Ragnar Fjelland.


AI-generated illustration from Midjourney


In Klassekampen on Thursday, May 2, Kjetil Rommetveit and Ragnar Fjelland from the University of Bergen criticize us for discussing artificial general intelligence (AGI), that is, AI that can perform all work tasks at least as well as humans.

They argue, for one, that AGI is so far from materializing that discussing the technology, even critically, fosters an excessive belief that AI will be better than the alternatives it replaces. It also takes oxygen away from "real challenges here and now."

The premise of the two philosophers of science's analysis is that "today we are not close to realizing artificial human-like intelligence (AGI), and that there are strong scientific arguments that it will never be possible." Among AI experts, this is a fringe position. Many believe AGI will arrive within the next few decades. Some think we will get AGI this decade. Some think it is a long way off, but only one percent of experts believe it will never happen.

Nor is it the case that a focus on tomorrow's problems prevents us from solving today's. In many cases, looking ahead makes it easier to solve the problems we face now. Since the technology is changing rapidly, we must try to anticipate where it is headed if we as a society are to steer and govern it. And today's and tomorrow's AI challenges largely have the same causes and solutions.

Rommetveit and Fjelland think the AGI concept is part of an ideological hinterland of transhumanism and eugenics (!), and that we should therefore avoid discussing it. Even if this description were correct, they draw the wrong inference. If we are worried that the tech billionaires driving AI development are driven by the wrong motives, the solution is more analysis of AGI, not less.

We know that the world's most powerful tech companies are trying to build AGI. Whether they will succeed, we do not know. That also depends on choices we make today, which in turn rest on the discussions we have. We would rather risk discussing something that never materializes than stand unprepared should it occur. That is surely wiser than ceding the power to set the agenda to the tech billionaires of Silicon Valley, as Rommetveit and Fjelland seem to want.
