Warnings about the dangers of a possible super AI have been around for a long time. OpenAI CEO Sam Altman sees them too and is bringing the proposal of international oversight back into play.
February 19, 2026, 3:55 p.m.
Against the backdrop of warnings about a so-called super AI that could pose a danger to humanity, OpenAI CEO Sam Altman has once again proposed the creation of a global regulatory authority modeled on the International Atomic Energy Agency (IAEA). It is obvious that rules and safety measures are urgently needed, he said at the AI summit in New Delhi. OpenAI's best-known product is the chatbot ChatGPT.
OpenAI CEO Sam Altman speaks at the AI summit in New Delhi. Image: keystone
In recent months, warnings have again grown louder about the potentially destructive power of AI, should the technology, by being used to improve its own programming, advance at an ever faster pace and become a kind of superintelligence.
Only a few years until super AI?
Early versions of such an intelligence may be just a few years away, Altman said. “If we are correct in our assessment, by the end of 2028 a larger share of the world’s intellectual capacity could lie within data centers than outside.” This should be taken seriously, he added. He also made clear that AI would fundamentally change the job market.
Altman pointed to other dangers and uncertainties: a possible super AI in the hands of dictators, misuse to create bioweapons with entirely new pathogens, or new forms of warfare. A debate across society is needed, he said, “before we are all surprised”.