A flaw discovered last year by a cybersecurity researcher showed that Meta's AI chatbot was allowing users to view other users' private prompts and AI-generated responses.
As reported by Cybernews, Meta has since fixed the issue; however, for an unknown period of time, the leak let users gain unauthorized access to another user's prompts and responses.
The flaw was first disclosed on December 26, 2024 by cybersecurity researcher and AppSecure founder Sandeep Hodkasia, and Meta deployed a fix on January 24, 2025. Hodkasia was researching how Meta AI allows users to edit their prompts to regenerate text. When a user edits an AI prompt, Meta's servers assign that prompt and its AI-generated response a unique number.
While editing an AI prompt, Hodkasia analyzed his browser's network traffic and found that he could change that unique number so that the servers would return a prompt and response belonging to another user. This means the servers were not checking whether the requesting user was actually authorized to see that prompt and its response.
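The bug described above is a classic insecure direct object reference (IDOR): the server trusts a client-supplied record ID without verifying ownership. The sketch below illustrates the pattern with hypothetical names (the record store, `get_prompt_vulnerable`, `get_prompt_fixed`, and the sample data are all invented for illustration; none of this is Meta's actual code).

```python
# Minimal illustration of the IDOR pattern behind the flaw.
# All names and data here are hypothetical, for illustration only.

PROMPTS = {
    101: {"owner": "alice", "prompt": "draft my resume", "response": "..."},
    102: {"owner": "bob", "prompt": "a private question", "response": "..."},
}

def get_prompt_vulnerable(prompt_id: int, requesting_user: str):
    # Vulnerable: returns whatever record matches the client-supplied
    # ID, without checking who owns it.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id: int, requesting_user: str):
    # Fixed: look the record up, then verify it belongs to the
    # user making the request before returning it.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        return None  # in a real API: respond 403/404
    return record

# "alice" tampers with the ID to request "bob"'s prompt:
leaked = get_prompt_vulnerable(102, "alice")   # record is returned
blocked = get_prompt_fixed(102, "alice")       # None: access denied
```

The fix is simply the ownership check on the server side; relying on IDs being hard to guess is not a substitute for authorization.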
Meta corrected the flaw and paid Hodkasia a $10,000 bug bounty, a company spokesperson confirmed, adding that the company found no evidence the bug had been exploited in the wild. This weakness follows an incident last month in which the Meta AI app was found to be publicly exposing users' conversations, highlighting how easily AI chatbots can cross privacy lines.
As more and more companies roll out chatbots, they should regularly check for potential security flaws to ensure these chats remain private and confidential, especially since chat histories may contain sensitive information.
Follow Tom's Guide on Google News to get our latest news, how-tos, and reviews in your feeds. Make sure to click the Follow button.


