
Consensus on AI Act - France, Germany and Italy give in

Jacob Wulff Wold

In December, EU policymakers reached a political consensus on the main points of contention in the AI Act, the EU's flagship legislation for regulating artificial intelligence. After a month of technical clarifications, formal agreement was reached on Friday.


AI-generated illustration from Midjourney


France, Germany and Italy have worked against the legislation all along, under lobbying pressure from their leading AI companies, pushing for more lenient rules for powerful AI models. After growing resistance both at home and from the other member states, Germany reversed course last week, and Italy and France eventually had to follow suit. The big tech companies may keep looking for opportunities to water down the legislation until it is finally adopted by the European Parliament in April, but this time around they have had to accept defeat.

- A victory for both safety and innovation in artificial intelligence in Europe. With stricter requirements for those who build the most powerful models, the greater regulatory burden falls on the largest players, who also have the greatest ability and resources to mitigate risk. At the same time, it will be easier for both small and large players to adopt the technology safely, says Jacob Wulff Wold, advisor at Long Term, about the development.

- Going forward, it will be interesting to see how the legislation works in practice. Much of it consists of high-level principles rather than concrete requirements.

In the AI Act, AI systems are classified primarily by area of application and fall into four categories: unacceptable, high, limited and minimal risk. Regulation is meant to be proportionate to the risk, and systems posing unacceptable risk are therefore prohibited.


(figure from Technology Council)

The four categories were initially defined solely by area of application, but with the emergence of general-purpose models such as the one behind ChatGPT, this had to be updated. All general-purpose models must document how they are made, and the most powerful ones are classified as posing systemic risk, with requirements similar to those for high-risk applications.

It will be some time before the legislation enters into force in Norway, but in the EU, applications with unacceptable risk will be banned later this year, and the requirements for new AI models take effect in spring 2025.
