Anthropic CEO Dario Amodei has reportedly said that artificial intelligence (AI) models hallucinate less than humans. According to the report, the CEO made the remark on Thursday at the company's developer event. During the event, the San Francisco-based AI firm released two new Claude 4 models with several new capabilities, including memory and better tool use. Amodei also reportedly suggested that when critics go looking for hard limits on what AI can do, "they're nowhere to be seen."
Anthropic CEO downplays AI hallucinations
TechCrunch reports that Amodei made the comment during a press briefing while explaining that there is no hard limit preventing AI from reaching artificial general intelligence (AGI). Answering a question from the press, the CEO reportedly said, "It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways."
Amodei added that TV broadcasters, politicians, and people in other professions make mistakes all the time, so AI's mistakes are not a knock on its intelligence, according to the report. However, the CEO reportedly acknowledged that AI models present wrong answers with confidence.
According to a Bloomberg report, earlier this month a lawyer representing Anthropic was forced to apologise in court after the company's Claude chatbot added a false citation to a legal filing. The incident occurred during the ongoing lawsuit brought by music publishers against the AI firm over alleged copyright infringement of at least 500 songs.
In an October 2024 essay, Amodei claimed that Anthropic could achieve AGI as early as next year. AGI refers to a type of AI technology that can understand, learn, and apply knowledge across many tasks, and can carry out functions without the need for human intervention.
As part of that vision, Anthropic released Claude Opus 4 and Claude Sonnet 4 at the developer conference. The models bring major improvements in coding, tool use, and writing. Claude Sonnet 4 scored 72.7 percent on the SWE-bench benchmark, earning the state-of-the-art (SOTA) distinction in coding.