- OpenAI CEO Sam Altman said in a recent interview that testing GPT-5 scared him
- He compared GPT-5 to the Manhattan Project
- He warned that AI's rapid growth is taking place without meaningful oversight
OpenAI chief Sam Altman has painted a portrait of GPT-5 that reads as more than a product launch. In a recent appearance on Theo Von's podcast, he described his experience of testing the model in a tone that raises more doubts than it settles, and that he seemed to want listeners to hear.
Altman said GPT-5 "feels very fast", while recalling moments during testing that made him deeply nervous. Despite being a driving force behind GPT-5's development, Altman said that during some sessions he looked at GPT-5 and compared it to the Manhattan Project.
The CEO also leveled a critique at current AI governance, suggesting that "there is no adult in the room" and that oversight structures are being outpaced by AI's development. This is a strange way to sell a product that promises a serious leap toward artificial general intelligence. Flagging potential risks is one thing, but implying that no one is in control of how GPT-5 behaves feels somewhat unsettling.
OpenAI CEO Sam Altman: "It feels very fast." – "I got scared during GPT-5 testing" – "Looking at it: what have we done ... like the Manhattan Project" – "There is no adult in the room"
Analysis: What the GPT-5 fear reveals
What exactly unsettled Altman is not entirely clear; the CEO did not go into technical details. Invoking the Manhattan Project is another over-the-top comparison: gesturing at world-altering, potentially destructive change and global stakes sits strangely next to what is, at heart, sophisticated autocomplete. And by saying it has created something it does not fully understand, OpenAI comes across as either reckless or careless.
GPT-5 is understood to be launching soon, and it is expected to go far beyond the capabilities of GPT-4. The "digital mind" evoked in Altman's comments may genuinely reflect how AI makers think about their work, but such quasi-religious, apocalyptic framing seems unwise. Public discourse around AI mostly toggles between breathless hope and existential fear, when somewhere in the middle seems more appropriate.
This is not the first time Altman has publicly acknowledged his unease about the AI arms race. He is on record saying that AI "can go quite wrong", and that OpenAI must act responsibly while still shipping useful products. But although GPT-5 will almost certainly arrive with better tools, a friendlier interface, and slightly snappier logic, the basic question it raises is about power.
The next generation of AI, being faster, smarter, and more intuitive, will be entrusted with even more responsibility. Based on Altman's own comments, that seems like a bad idea. And even if he is exaggerating, I am not sure this is a company that should get to decide how that power is deployed.