Memo

Norwegian companies should set the AI standard

Aksel Braanen Sterri
Jacob Wulff Wold

Artificial intelligence (“AI”) has great potential, but the technology and the management systems for using it are immature. This article gives public and private enterprises an overview of the central parts of the risk picture associated with artificial intelligence and what can be done to address the relevant risk factors. In this way, Norwegian companies can set the standard for responsible use of AI.

AI-generated illustration from Midjourney


Trustworthy Artificial Intelligence

The development of AI can, at best, massively increase a business's productivity and, at worst, undermine the basis for its existence. Five questions are central to identifying the main choices a business faces with artificial intelligence and what it should consider in that regard:

1. What needs does the business have?

2. How can these needs be met with the help of AI?

3. What are the risks associated with using and not using AI?

4. What are the sources of risk?

5. What should businesses do to manage the risks?

To answer these questions, the think tank Langsikt and Ernst & Young Advokatfirma AS have launched a “Standard for Trustworthy Artificial Intelligence” (the “Standard”) for both private and public enterprises, with practical advice for responsible development in all parts of the lifecycle of AI solutions.[1]

We developed this Standard to remedy a problem with existing frameworks: they are often very general and raise more questions than they answer.[2] The Langsikt and EY Standard for trustworthy AI is therefore designed as a practical guide for businesses considering adopting AI.

In this article, we describe how Norwegian companies can facilitate trust-building use of AI through seven principles for responsible use of AI and four overarching governance principles. The article builds on the work on the Standard, with references to external sources.[3]

Norway has the prerequisites to be a leader in responsible AI development. We must keep up with the development of AI, but on our own terms and with Norwegian values. Trust is a mainstay of Norwegian society and an important factor in the development and use of AI. If Norwegian companies maintain and manage this trust by identifying and implementing measures to limit the risks associated with the development and use of AI, trust can become a competitive advantage for Norwegian businesses.

A systematic approach to AI and risk will make it easier for businesses to meet regulatory requirements, promote responsible and trust-building innovation, and enable businesses to better manage AI-related risks in the face of future challenges.

The article is structured as follows: In the next part we explain the two main forms of AI and strategic considerations in the use of AI. In the “Five Types of Risk” section, we present five types of risk: legal risk, ethical risk, value risk, task risk and disruption risk, and explain the relationship between them. We go into more detail on three of these: legal risk, in particular the EU's AI Regulation; ethical risk, with seven principles for responsible AI; and task risk, with a discussion of how good data processing and governance practices can increase the precision and relevance of AI systems. In the section “How should businesses manage risk?” we explain how businesses should live with and mitigate risk in line with an expected-benefit framework. In the final section, “Four Principles of Trustworthy Artificial Intelligence,” we present four principles businesses can follow when developing and using AI to ensure trustworthy AI.

Strategic considerations in the use of AI

Today, one often distinguishes between two types of AI:

Generative AI is probably best known through OpenAI's ChatGPT and derives part of its name from the technology's ability to generate new text, images, audio and video. The most general generative models increasingly also constitute a new infrastructure, known as foundation models. Foundation models have good general capabilities, which can then be specialized as needed.

Narrow AI is adapted to a narrow range of applications, where it typically performs very well. For example, such systems can be used for image recognition, playing games, or calculating the 3D shapes of proteins from a genetic code.

Both types of AI are developed through machine learning. Previously, rules were programmed directly into the AI system. Today, developers let the model develop its own rules by rewarding it during training for producing the results the developers want. Training such models requires data for the model to train on and computing power to perform the training computations.[4]
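To make the contrast concrete, here is a minimal, hypothetical sketch of this learning process in Python. Instead of programming the rule y = 2x + 1 into the system, the developer rewards the model, by penalizing its error, until it finds the rule itself. All names and numbers are our own illustration, not part of the Standard.

```python
# Minimal sketch: instead of hand-coding a rule, a toy model learns the rule
# y = 2x + 1 from example data, being "rewarded" through a shrinking error.

# Training data: inputs paired with the outputs the developers want.
examples = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0          # the model's "rules", learned rather than programmed
learning_rate = 0.01

for epoch in range(1000):
    for x, target in examples:
        prediction = w * x + b
        error = prediction - target
        # Adjust the rules slightly in the direction that reduces the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"Learned rule: y = {w:.2f}x + {b:.2f}")  # approaches y = 2.00x + 1.00
```

Real AI systems have billions of such adjustable parameters rather than two, which is why training them demands large amounts of data and computing power.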

Regardless of the type of AI a company is considering developing or adopting, Norwegian companies should have a strategic view of why and how AI can be used. The most important overarching questions a business must consider in the face of artificial intelligence are:

1. What specific needs can the company satisfy by developing or adopting AI? Examples include automating existing work tasks or using AI to solve problems that could not be solved with existing skills and manpower.

2. What need does the business have to build up expertise with the help of AI, which in the long run will help satisfy more concrete needs?

3. What need does the business have to adapt its business model or business strategy to the technological development?

The use of AI must be adapted to the individual business and the tasks it performs. This must be assessed on an ongoing basis in light of which AI solutions exist and how AI changes the market or sector in which the business operates. Since the technology and the supply of new models are changing rapidly, the company should maintain an overview of the current situation and a plan for different future scenarios. It is important that the business takes a long-term and overall strategic perspective: the concrete AI models are evolving rapidly, and an in-depth assessment of existing models and needs can quickly become outdated. Businesses that are able to think ahead and develop strategies for different possible scenarios will be better equipped to face the development of AI.

Five Types of Risk

AI can satisfy the needs of the business, but it also carries risks. Risk is the product of the damage potential of an adverse event and the probability of that event occurring: the more likely the harm and the more severe it is, the greater the risk.
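As a hypothetical illustration of this definition, with made-up events and numbers of our own, the sketch below scores two adverse events as probability times damage potential:

```python
# Illustrative only: risk as the product of damage potential and probability.

events = {
    # event: (probability per year, damage potential in NOK)
    "model gives discriminatory output": (0.10, 5_000_000),
    "minor formatting error in a report": (0.90, 10_000),
}

for name, (probability, damage) in events.items():
    risk = probability * damage  # expected damage per year
    print(f"{name}: expected damage {risk:,.0f} NOK/year")

# A rare but severe event (500,000 NOK/year) can thus carry far more risk
# than a frequent but trivial one (9,000 NOK/year).
```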

Both using and not using AI carry risks. Use may entail a risk of violating legal requirements, for example, but AI can also help the company comply with laws and regulations, and it can improve the company's services and products. Avoiding AI may expose the business to risks as great as those of irresponsible use: not adopting AI entails the risk of being outcompeted or of neglecting the competence building required to stay competitive. Risk must therefore be balanced against the expected benefit.

Overall, the risk picture can be grouped into five forms of risk faced by the business:

1. Legal risk: risk of breach of legal obligations, both current and future, with civil or criminal sanctions if the business breaches its obligations.

2. Ethical risk: risk that one acts in violation of ethical norms.

3. Value risk: the risk that the business does not behave in line with the company's values.

4. Task risk: the risk that the business does not solve its tasks satisfactorily.

5. Disruption risk: the risk that technological and societal changes undermine the basis for the business's existence.

These risk types are not mutually exclusive, but can overlap. Legal obligations are rooted in ethical norms; violations of them are therefore also violations of ethical norms. But businesses can also violate ethical norms without violating legal obligations. The company's values will likewise overlap with legal and ethical norms, but there may be features specific to the business and its values that require a separate assessment.

Together and separately, these risks can threaten the business's ability to achieve its objectives or perform its functions.[5] Exposure to the different types of risk can also damage the business's reputation. The business should have a conscious relationship to the five risks when implementing AI.[6] It would take us too far to cover all the categories here, so in the following we address legal risk, ethical risk and task risk.

Legal risk and EU risk categories

A major source of legal risk is acting contrary to the EU's new rules for AI. The EU AI Regulation applies in the EU from 1 August 2024, with the rules for the various risk categories being implemented over the following two years. The Norwegian government has stated that it intends to follow the EU's implementation plan.

The EU's AI Regulation regulates AI according to risk, role and scope of application. The company must take an active view of what role it has in the different situations where AI is involved and, at the same time, be able to map its use of AI to the different risk categories.

The risk categories range from unacceptable risk through high risk and limited risk to minimal risk. The categories are regulated differently, from outright prohibition to use permitted under strict, limited, or few to no conditions. See Table 1 for more detail on the categories and their regulation.

[Table 1 has four columns: the risk category (unacceptable risk, high risk, limited risk, minimal risk), the regulation of that category, a description, and examples of AI systems.]
Table 1. Overview of the AI Regulation's risk categories
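As a rough illustration of how a business might encode the four categories internally, the sketch below pairs each category with its regulatory consequence and a commonly cited example system. This is our simplification for illustration only, not a legal mapping under the Regulation:

```python
# Simplified sketch of the four risk categories and their regulatory
# consequences. The example systems are common illustrations from the public
# debate around the AI Regulation, not an exhaustive legal classification.

from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "permitted under strict conditions"
    LIMITED = "permitted under limited (transparency) conditions"
    MINIMAL = "permitted with few or no conditions"

EXAMPLES = {
    RiskCategory.UNACCEPTABLE: "social scoring of citizens",
    RiskCategory.HIGH: "creditworthiness assessment",
    RiskCategory.LIMITED: "customer-service chatbot",
    RiskCategory.MINIMAL: "spam filter",
}

for category in RiskCategory:
    print(f"{category.name}: {category.value} (e.g. {EXAMPLES[category]})")
```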

Obligations under the AI Regulation depend on the business's role in the development or use of AI. The Regulation distinguishes between developers (providers), distributors, importers and users (deployers).[7] Most Norwegian companies will be users of AI. The requirements for users depend on the risk category of the AI system. Requirements for users of high-risk AI systems include:

• An assessment of the impact on fundamental rights before the AI system is put into use, if the user:[8]

 ◦ is a public body or a private entity that provides public services, or

 ◦ provides essential private services, including creditworthiness assessment and risk assessment and pricing of life and health insurance.

• Human oversight by persons with appropriate training and competence.

• Ensuring that prompts and other input data are relevant to the use of the system.

• Verification that the AI system complies with the AI Regulation and that all relevant documentation is available.

The requirements show that it is not only developers of AI systems who are subject to significant obligations regarding how the AI system functions. Companies that adopt AI are also covered, including specific requirements for training in the use of AI systems and for documentation of how the system complies with the AI Regulation.

Although the AI Regulation has not yet entered fully into force, AI does not operate in a legal vacuum today. When assessing legal risk, businesses must also look to technology-neutral legislation.[9] Companies must therefore have a good understanding of the legal requirements that apply to the development and use of AI, and they should have a strategic view of what the AI will be used for in order to weigh costs and benefits against the expected requirements.

Besides existing and future legislation, it is useful for the business to examine whether there are relevant standards that can be used either to certify its own model or to check the certification of a model the business plans to use.[10] The company must itself assess what is relevant for its development or use of AI, based on factors such as industry, geographical location, risk profile and level of knowledge.

Ethical risk and principles for responsible use of AI

To prevent ethical risk, the business should act in line with generally accepted principles for ethical and responsible use of AI. There is considerable overlap between the ethical and the legal: legal rules often rest on ethical considerations, such as fairness. But the overlap is not perfect; not everything immoral is illegal, and not everything illegal is immoral.[11] After reviewing the research literature and existing principles for ethical and responsible AI, we have prepared a list of seven principles for responsible AI.

[Table 2 lists the seven principles of responsible AI, with a justification for each: 1. Fairness, 2. Preventing harm, 3. Self-determination, 4. Privacy, 5. Good justifications, 6. Understanding, 7. Responsibility.]
Table 2. The seven principles of responsible AI

The seven ethical principles are intended to guide businesses that want to develop and adopt AI. They are not necessarily exhaustive, and applying them requires judgment. A widely discussed example is AI systems that can lead to discrimination and marginalization because of biases and weaknesses in data. Businesses have a duty to take reasonable steps to prevent their systems from discriminating against groups. This duty is grounded in several of the ethical principles, including fairness, self-determination, good justifications and understanding (in addition to being relevant under current discrimination law). Since AI systems can benefit businesses, users and third parties, ethical risk should not scare businesses away from AI: companies that do not use AI can also act contrary to ethical considerations, and AI can at best help the business reduce the risk of discrimination and other problematic differential treatment.

Task risks and procedures for using AI

The use of AI comes with task risk, i.e. the risk that the business does not solve its tasks satisfactorily. Whether the business uses the technology itself or offers AI solutions to others, it is important that users understand what the AI solution can and cannot do.

Before adopting an AI model, the business should consider the opportunities and risks associated with its use. On that basis, it can assess how the model can appropriately be used and whether it should be used at all.

Good use of AI requires good control of data. If the business trains its own AI model, it is essential to ensure good training data. For example, if the data the model is trained on comes from a different population than the one the model will be applied to, the business risks imprecise or incorrect results. An imprecise model will make more errors, which may harm the business's ability to perform its core tasks.

Data that is not representative can still be used if the business is aware of the limitations. Expected differences between training and use should be mapped so that they can be taken into account in the training, testing, use and communication of the model, as in the sketch below.
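A minimal version of such mapping, using hypothetical numbers of our own, is to compare simple summary statistics for a feature, such as age, between the training data and the population the model will meet in use:

```python
# Minimal sketch: map differences between the training population and the
# deployment population by comparing the mean and spread of a single feature.

from statistics import mean, stdev

training_ages = [34, 41, 29, 52, 45, 38, 47, 33]    # hypothetical samples
deployment_ages = [22, 25, 31, 24, 28, 27, 23, 26]  # hypothetical samples

def summarize(label, values):
    print(f"{label}: mean={mean(values):.1f}, stdev={stdev(values):.1f}")

summarize("training  ", training_ages)
summarize("deployment", deployment_ages)

# A clear shift, as here, signals that performance figures from training data
# may not carry over and should be flagged in testing, use and communication.
```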

The choice of AI model is crucial. In section 2.6 of the Standard we therefore provide a checklist for selecting an AI model. Developers should be aware that AI models introduce new vulnerabilities into the business's systems. The safest way to ensure that an AI model behaves as desired is to understand how it works: how does it make decisions, and under what circumstances does it get things wrong? This can be achieved with good routines for testing and analyzing the model during and after training.
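One such routine can be sketched as follows: break test accuracy down by subgroup to see under which circumstances the model errs. The `predict` function, the groups and the test data are hypothetical placeholders, not part of the Standard's checklist:

```python
# Illustrative routine: analyze where a model errs by breaking test accuracy
# down per subgroup of the test set.

from collections import defaultdict

def error_analysis(test_set, predict):
    """test_set: list of (features, group, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, group, true_label in test_set:
        total[group] += 1
        if predict(features) == true_label:
            correct[group] += 1
    for group in total:
        accuracy = correct[group] / total[group]
        print(f"group={group}: accuracy={accuracy:.0%} (n={total[group]})")

# Toy demonstration with a model that always answers 1:
demo = [((1, 0), "A", 1), ((0, 1), "A", 1), ((1, 1), "B", 0), ((0, 0), "B", 1)]
error_analysis(demo, predict=lambda features: 1)
```

A large accuracy gap between groups tells the business under which circumstances the model gets it wrong, before those errors reach users.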

Task risk can be reduced by good routines. It is important that:

1. users have good knowledge of what the AI models can and cannot do. This reduces the chance of users relying too much or too little on the model, or using it incorrectly.

2. the business trains users of the AI model in how to get the most out of it and avoid mistakes, including how to phrase questions and provide context in ways that reduce the risk of negative outcomes, such as discriminatory results.

3. AI is only used where the benefits of application, such as increased efficiency, exceed the downsides, such as the costs to people, society and the environment.

4. the business establishes mechanisms to check that the AI is used in the way it intends, as in the sketch below.
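A minimal sketch of such a control mechanism (point 4) could be a wrapper that checks every call to the model against the purposes the business has approved and logs the use for later review. The function names and the list of approved purposes are hypothetical:

```python
# Illustrative control mechanism: every call to the AI model is checked
# against approved purposes and logged. `ask_model` is a hypothetical
# stand-in for any model API.

import logging

logging.basicConfig(level=logging.INFO)
APPROVED_PURPOSES = {"document_summary", "translation"}  # set by the business

def ask_model(prompt: str) -> str:
    return f"(model answer to: {prompt})"  # placeholder for a real model call

def controlled_ask(prompt: str, purpose: str) -> str:
    if purpose not in APPROVED_PURPOSES:
        logging.warning("Blocked AI use for unapproved purpose: %s", purpose)
        raise PermissionError(f"AI use not approved for purpose: {purpose}")
    logging.info("AI used for approved purpose: %s", purpose)
    return ask_model(prompt)

print(controlled_ask("Summarize the attached memo.", "document_summary"))
```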

How should businesses manage risk?

Risk is something businesses have to live with. Zero risk is not practically achievable, whether in business, the public sector or society at large. Risks related to the development or use of AI can therefore not be eliminated, but must be managed prudently.[14]

Owners, employees, creditors, customers and the rest of society will expect the business to take reasonable steps to reduce the risk of various events occurring and to manage the situation should the risk occur.

What counts as reasonable measures to manage risk will largely depend on a cost-benefit analysis: how much will a measure reduce risk, and at what cost to business and society? Some measures cost more than they are worth relative to the risk reduction they achieve; as a rule, such disproportionate measures should not be implemented. Among proportionate measures, the most effective should be adopted, as the sketch below illustrates.
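This reasoning can be expressed as a simple calculation, with hypothetical measures and numbers of our own: discard measures whose expected risk reduction is below their cost, and rank the rest by net benefit:

```python
# Illustrative sketch of the expected-benefit framework: keep only measures
# whose risk reduction exceeds their cost, then rank the proportionate ones.

measures = [
    # (name, yearly cost in NOK, expected yearly risk reduction in NOK)
    ("human review of all AI output", 2_000_000, 500_000),
    ("bias testing before deployment", 300_000, 900_000),
    ("user training programme", 150_000, 400_000),
]

# Disproportionate measures ("cost more than they are worth") are dropped.
proportionate = [m for m in measures if m[2] > m[1]]
proportionate.sort(key=lambda m: m[2] - m[1], reverse=True)

for name, cost, reduction in proportionate:
    print(f"{name}: net expected benefit {reduction - cost:,} NOK/year")
```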

In the face of risk, it is therefore essential that the business:

1. Obtains an overview of and information about risks and the sources of risk[15]

2. Acquires relevant competence to manage risk

3. Assigns responsibility for the governance of AI and for risk-management measures

4. Implements measures to manage or minimize risk

5. Identifies weaknesses or deficiencies in the measures, evaluates them and ensures continuous improvement in line with application and technological development

6. Puts in place good governance systems for points 1, 4 and 5 that reduce risk and create trust among owners, users, employees and society (which in turn reduces risk)

7. Continuously monitors the business's governance systems.

Given the rapid technological development, the company must maintain a continuous overview of how AI affects it and of the extent to which the various risks and sources of risk materialize. Such an approach will enable responsible use of AI.

Four Principles of Trustworthy Artificial Intelligence

If companies use AI responsibly, it will benefit the companies themselves, as well as users and society at large. It will also help maintain society's high level of trust in business and the public sector in the face of a disruptive technology. To ensure trustworthy artificial intelligence, we propose four principles businesses should follow when developing and using AI:

1. Purposeful: Ensure that AI is used to achieve the business's overall goals, and only where the benefits of its use exceed the disadvantages for the business and society at large.

2. Risk management: The business should identify risks and have a systematic approach to reducing risk in line with the expected benefit.

3. Considerate: Take ethical and legal considerations into account in the company's development and use of AI, and ensure that the use is in line with regulations, industry standards and reasonable expectations from users and society.

4. Holistic: Ensure that all functions of the business and all users get to experience the positive effects of AI.
