Meta AI reportedly had a vulnerability that could let users access other users' private conversations with the chatbot. To exploit the issue, an attacker did not need to break into Meta's servers or tamper with the app; it could be triggered simply by analysing network traffic. According to the report, a researcher found the flaw at the end of last year and disclosed it to the Menlo Park-based social media giant. The company then fixed the issue in January and rewarded the researcher for responsibly reporting the exploit.
According to a TechCrunch report, the Meta AI vulnerability was discovered by Sandeep Hodkasia, the founder of the security testing firm AppSecure. The researcher reportedly informed Meta in December 2024 and received a bug bounty of $10,000 (about Rs. 8.5 lakh). Meta spokesperson Ryan Daniels told the publication that the issue was fixed in January, and that the company found no evidence the flaw had been abused by bad actors.
The vulnerability reportedly lay in how Meta AI handled prompts on its servers. The researcher told the publication that the AI chatbot assigns a unique ID to each prompt and its AI-generated response whenever a logged-in user edits a prompt to regenerate an image or a piece of text. Such edits are very common in ordinary use, as most people tweak their prompts over the course of a conversation to get a better response or the desired image.
Hodkasia reportedly found that he could edit an AI prompt and obtain its unique number by analysing the network traffic in his browser. By then changing that number, the researcher could access someone else's prompts and the AI's responses to them. He claimed the numbers were "easily guessable", and that finding other valid IDs took little effort.
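The flaw described above is a classic insecure direct object reference (IDOR). The sketch below is purely illustrative, not Meta's actual code: it simulates a server that stores prompts under sequential numeric IDs and returns a record for any ID without checking who is asking, so an attacker who sees their own ID in network traffic can simply change the number.

```python
# Hypothetical in-memory "server" storage; names and IDs are invented
# for illustration only, not taken from Meta's systems.
PROMPTS = {
    1001: {"owner": "alice", "prompt": "Draft my medical query", "response": "..."},
    1002: {"owner": "bob", "prompt": "A legal question", "response": "..."},
}

def get_prompt_insecure(requesting_user: str, prompt_id: int) -> dict:
    """Flawed handler: returns the record for any ID and never checks
    whether the requester actually owns it."""
    return PROMPTS[prompt_id]

# An attacker logged in as "mallory" increments the ID seen in their own
# network traffic and receives Bob's private conversation.
leaked = get_prompt_insecure("mallory", 1002)
print(leaked["owner"], "-", leaked["prompt"])
```

Because the IDs are small sequential integers, an attacker could also enumerate them in a loop to harvest many users' conversations, which matches the researcher's point that the numbers were "easily guessable".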
Essentially, the vulnerability existed because Meta's servers did not check whether a user was authorised to access the data tied to these unique IDs before returning it. This means that, in the hands of a bad actor, the method could have compromised a large amount of users' private data.
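A minimal sketch of how such a hole is typically closed, assuming two standard mitigations (the article does not describe Meta's actual fix): an ownership check on every read, and random, unguessable tokens instead of sequential numbers. All names here are hypothetical.

```python
import secrets

PROMPTS = {}

def save_prompt(owner: str, prompt: str, response: str) -> str:
    # Random 128-bit hex token instead of a sequential, guessable number.
    prompt_id = secrets.token_hex(16)
    PROMPTS[prompt_id] = {"owner": owner, "prompt": prompt, "response": response}
    return prompt_id

def get_prompt_secure(requesting_user: str, prompt_id: str) -> dict:
    record = PROMPTS.get(prompt_id)
    # Authorization check: refuse unless the requester owns the record.
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("not authorized to view this prompt")
    return record

pid = save_prompt("alice", "hello", "hi there")
print(get_prompt_secure("alice", pid)["prompt"])  # the owner can read it
try:
    get_prompt_secure("mallory", pid)             # anyone else cannot
except PermissionError:
    print("access denied")
```

Either measure alone narrows the attack; together they ensure that guessing an ID is infeasible and that even a correctly guessed ID returns nothing to a non-owner.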
Notably, a report last month said that the Meta AI app's Discover feed was filled with posts that appeared to be private conversations with the chatbot. These messages included people seeking medical and legal advice, and even admitting to crimes. In late June, the company began showing a warning message to prevent people from inadvertently sharing their conversations.


