OpenAI’s o3 artificial intelligence (AI) model recently helped a cybersecurity researcher expose a zero-day vulnerability in Linux. According to the researcher, the flaw was found in the implementation of the Linux kernel’s Server Message Block (SMB) server, known as ksmbd. Security flaws of this kind are considered difficult to find because they involve handling multiple concurrent users and connections at the same time. The issue is now tracked as CVE-2025-37899, and a fix has already been released.
OpenAI’s o3 uncovers a zero-day vulnerability
The use of AI models to hunt for security vulnerabilities is still relatively rare, despite the technology’s growing capabilities. Most researchers prefer to expose such flaws through traditional code auditing, which can be a burdensome way to analyse a large codebase. Researcher Sean Heelan detailed in a blog post how OpenAI’s o3 model helped him uncover the flaw relatively easily.
Interestingly, this flaw was not the researcher’s focus at all. Heelan was testing the AI’s abilities against a different, already-known issue (CVE-2025-37778), described as a Kerberos authentication vulnerability. That bug falls into the “use-after-free” category, which means one part of the system frees a region of memory while other parts still try to use it. This can cause crashes and security issues. The AI model managed to find the flaw in eight of 100 runs.
Once Heelan confirmed that o3 was able to detect the known security bug when given a small section of code, he decided to feed it the entire file containing the session setup command handler instead of just one function. This file runs to roughly 12,000 lines of code and handles a variety of requests. He likened it to handing the AI a novel and asking it to find a flaw of one specific type, a type that could potentially crash the machine.
o3 was asked to run 100 times over this entire file, and this time it found the known bug in only one of the runs. Heelan acknowledged the drop in performance but highlighted that the AI was still able to find the problem, which he considered a notable achievement. More importantly, in another run the OpenAI model flagged a completely different problem, one that was previously unknown and that the researcher himself had missed.
This new security flaw was of the same use-after-free nature, but it affected the SMB logoff command handler. The zero-day involved the system attempting to access an object that had already been freed; the issue was triggered when a user logged off or ended a session while another connection was still using the same session.
According to o3’s report, the bug could potentially crash the system or allow attackers to run code with deep system access, making it a major security concern. Heelan highlighted that o3 was able to understand a difficult problem in a real-world scenario, and that its report explained the danger clearly.
Heelan added that o3 is not perfect and has a low signal-to-noise ratio, producing a high proportion of false positives. However, he found that, unlike traditional security tools, the model reasons about code much as a human would when looking for bugs, which he sees as a powerful way of working.