What if Trump obtains super-intelligent AI?
No one wants China to win the AI race. But how comfortable are we if Donald Trump's United States crosses the finish line first?

AI-generated illustration from Sora
Key points
According to a new report called AI 2027, the leading AI labs could create super-intelligent AI systems within the next few years. This is in line with what Sam Altman and Dario Amodei, the leaders of the top AI companies, promise. But where Altman and Amodei believe super-intelligent AI systems will lift humanity, AI 2027 tells a different story.
If the labs manage to control the technology, it will give Donald Trump, Xi Jinping, or both access to the most powerful tool in history. More likely, the authors believe, the companies will create an agent they are unable to control -- one that will lead to the annihilation of humanity.
The predictions sound like science fiction, but the authors are not just anyone. Eli Lifland has the most accurate predictions of all participants in RAND's Forecasting Initiative. Daniel Kokotajlo previously worked at the AI company OpenAI, and his predictions about AI development from 2021 have proved frighteningly accurate. Read his 2021 blog post “What 2026 looks like” and judge for yourself.
2021 was the year before ChatGPT revolutionized our understanding of language models and the ability of AI systems to communicate like humans. Things have not stood still since ChatGPT launched at the end of 2022 and, by January 2023, became the fastest-growing application in history with 100 million active users.
Today's AI models can create strikingly realistic video clips, generate images as if they were professional photographers or artists, write master's theses that earn top grades, and outperform people with PhDs on tests in their own fields.
New and better models are launched faster than we can get to know the existing ones. Most of us cannot keep up. The authors of AI 2027, by contrast, try to look into a future where super-intelligent AI systems take control.
It is hard to believe that AI systems will become as intelligent as the authors predict. Even though chatbots are already smarter than the vast majority of people, they have so far done little to change working life or how we run society.
This has natural explanations, however. The models' decisions are often difficult to understand, and in jobs where decisions must be justified, that lack of transparency puts the brakes on adoption. There is also uncertainty about how the data we feed the AI models is handled: we cannot integrate AI systems into important societal functions without being sure that sensitive data does not go astray.
The main reason the AI systems have not changed society, however, is that the models are passive and depend on continuous supervision. For workers to be replaced by machines on a large scale, the machines must be able to operate on their own. Today's models cannot.
This is about to change, however. The latest craze among the AI companies is "AI agents", that is, AI systems that can carry out tasks on their own. The best models can already complete tasks that take a human about an hour, and the length of tasks the systems can handle doubles every seven months. If the trend continues, AI agents will within a few years carry out complicated tasks spanning weeks, and eventually months.
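The arithmetic behind that extrapolation is simple exponential growth. A minimal sketch (the seven-month doubling period comes from the trend described above; the one-hour starting point and the horizons are illustrative assumptions):

```python
# Extrapolate the task-length trend: tasks an AI agent can complete,
# assumed to start at about one hour and double every seven months.
# (Doubling period from the text; starting point and horizons are illustrative.)

def task_length_hours(months_from_now, start_hours=1.0, doubling_months=7.0):
    """Length of task (in hours of human work) an agent can handle."""
    return start_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 12, 24, 36, 60):
    print(f"{months:2d} months: ~{task_length_hours(months):6.1f} hours")
```

Under these assumptions the horizon reaches roughly a working week (about 35 hours) after three years, and several hundred hours, i.e. months of full-time work, after five.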
Add to this another development: the models keep getting better with more training, and the systems now "think" before they answer. Anyone willing to extend the trend lines into the future will see that the world could look quite different in a few years.
Much of the debate concerns whether AI systems will become as smart as humans at a wide range of tasks, i.e. generally intelligent. But there is no reason to expect that a system reaching human-level intelligence will stop there.
An important part of the narrative in AI 2027 is that an AI system reaching a certain level of intelligence can be used to improve itself and thus become more intelligent than humans; it can become super-intelligent.
AI systems improving themselves are not science fiction. Take the game of Go, a highly complex game that takes humans decades to master. Using "reinforcement learning", DeepMind's AlphaZero trained itself to become the best Go player in the world in a matter of days.
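Reinforcement learning itself is a simple idea: an agent tries actions, receives rewards, and gradually learns which actions pay off. A toy sketch (the corridor task, the parameters, and the tabular Q-learning variant are all invented for illustration; AlphaZero combines reinforcement learning with self-play and deep neural networks at vastly larger scale):

```python
import random

# An agent learns, purely from trial, error, and reward, to walk right
# along a 5-cell corridor to reach the goal in the last cell.
N_STATES = 5          # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    if random.random() < EPSILON:                      # sometimes explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # otherwise exploit

random.seed(0)
for _ in range(500):                  # episodes of trial and error
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy: after enough episodes, "always step right".
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

No one tells the agent the solution; the reward signal alone shapes its behavior. Self-improvement in AI 2027 is this same loop, but with the system's own capabilities as the thing being optimized.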
If the best AI systems start to improve themselves, we will find ourselves in a difficult situation. The first question is how we can control a system that is smarter than we are. We cannot know whether the systems have actually internalized our values or are merely pretending. Even today's models show signs of deceiving us.
If the systems deceive us and become super-intelligent, it will only be a matter of time before they outmaneuver us. This is the path to the annihilation of mankind, if we are to believe the authors of AI 2027.
It is therefore essential that companies succeed in controlling the AI systems they create. But, as the authors describe, it is not obvious who, in that case, should control the systems.
Today, the Americans and the Chinese are leading the race. No one wants China to come first. But how safe are we if it is instead Trump's America that first gains control of what may become humanity's most powerful weapon?
Today, there is no good solution to the problem of super-intelligent systems. The safest thing we can do is to try to take away from the corporations the power to create such systems without oversight from the world community. But these days, who would bet on the world community managing to step in?