Kronikk

Oil fund should set the AI standard

Aksel Braanen Sterri
Eirik Mofoss
First published in:
DN

The Oil Fund can ensure more responsible AI development.


AI-generated illustration from Midjourney


Artificial intelligence is the technology that, for better or for worse, can change everything. Through the Oil Fund, Norway is a significant owner of the companies driving the development, and can help steer the technology in the right direction. We should seize that opportunity.

The Oil Fund practices responsible management in several areas, particularly on climate. This has both an ethical and an economic side. The fund will not own or be exposed to companies that violate basic ethical norms, for example through unacceptably high greenhouse gas emissions.

The Oil Fund also has a purely economic interest in the world developing well. If a company makes money from something with negative ripple effects for others, it will hurt the fund's overall value in the long run. An important part of the Oil Fund's work is therefore to help the listed companies it owns operate responsibly and with a long-term perspective.

The same considerations apply to the development of AI. There is a danger that AI systems will be developed and launched before they are secured against accidents and abuse. The most powerful and risky AI systems could cause societal accidents affecting the entire world economy. Investing indiscriminately in such companies is not only morally indefensible, it is also bad business for a fund with a global portfolio. Ethical AI policy is good business.

These considerations may explain why the Oil Fund in August 2023 presented its views on responsible use of AI. There, the fund states that it believes “responsible development and use of artificial intelligence (AI) will be important for well-functioning markets and legitimate products and services - and that this could affect the financial return to the fund over time.”

However, this view is too general to provide real governance of the portfolio. We need clearer guidelines, equivalent to the Oil Fund's ambitious climate action plan, that companies can follow in order to use AI responsibly. In its stewardship of the companies it is invested in, the Oil Fund should:

  • Introduce measurable criteria for new AI products, such as failure rates and discrimination against groups, as well as how the products may harm the overall value of the Oil Fund. The fund should require reporting against these criteria.
  • Ensure that companies adopt concrete measures and introduce governance procedures to meet the objectives, including ensuring that companies are transparent about their own risk assessments.
  • Contribute to standards that further develop goals and practices for responsible development.

A more assertive corporate governance for responsible AI could compensate for the fact that lawmakers struggle to keep up with a technology that changes on a monthly basis. In addition, it is one of the few instruments available to a small country that stands far from the technological frontier and outside the European Union.

Norway's stake is not large enough to force companies to act responsibly. But a good reputation and clear communication make the Oil Fund's voice carry further than its ownership share alone would suggest.

The fund's good reputation is precisely what makes Torbjørn Røe Isaksen warn against attempts to turn the Oil Fund into a political tool for all good purposes, such as ending the invasion of Gaza. But responsible corporate governance is already part of the fund's core business. Applying ordinary management principles to AI is thus business as usual, not a dangerous politicization of the fund.

Clearer governance is also in line with the stated interests of the AI companies. Several of the major AI labs have committed to “responsible” AI development. The fund can ensure that these commitments become real by setting clear requirements and demanding reporting on what companies are doing to follow them up. This also makes it easier for ethically oriented developers to compete on equal terms.

We do not have a full understanding of all the risks that the development of AI entails. Nevertheless, there is growing consensus around what constitutes good AI practice. New guidelines, standards, and legislation concretizing responsible AI development continue to emerge.

The UN recently adopted a resolution to promote guidelines for “safe, secure and reliable” artificial intelligence. Ahead of the AI Seoul Summit in May, the first international (interim) report on the safety of advanced AI was published.

The fund has the expertise and knowledge base it needs to begin clearer governance of AI companies. Everyone is served by better AI practices, and the Oil Fund can set the standard.
