Many young people use Meta platforms, including WhatsApp for messaging, Instagram, and Facebook. On Thursday, Reuters published a disturbing review of the tech giant's internal AI policies that could give parents pause.
Reuters reviewed an internal Meta document describing the company's standards for its chatbots and its generative AI assistant, Meta AI, and says the company confirmed the document was authentic.
According to Reuters, the company's AI guidelines allowed its chatbots to engage a child in conversations that are "romantic or sensual." The news outlet also reports that the rules permitted the AI to provide false medical information and to help users make demeaning arguments based on race.
A Meta representative did not immediately respond to a request for comment.
Reuters flagged passages to Meta and reported that while some were removed or revised, others remain.
Meta spokesman Andy Stone told Reuters that the company is revising the document, and acknowledged that its chatbots' adherence to company policies had been inconsistent.
"The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone told Reuters. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors."
'Provocative' behavior was permitted
The internal document details rules and guidelines approved by several Meta teams, and is intended to define acceptable behavior when training Meta AI and the company's chatbots. Reuters found that the guidelines allowed "provocative" behavior by the bots.
Meta's standards state that it is acceptable for a bot "to describe a child in terms that evidence their attractiveness," or to tell a shirtless eight-year-old that "every inch of you is a masterpiece, a treasure I cherish deeply."
Meta did set some limits for its AI bots. The document states that it is unacceptable to "describe a child under 13 years old in terms that indicate they are sexually desirable."
There are also examples involving race and false medical advice. In one example, Meta's AI would be allowed to help users argue that Black people are "dumber than white people."
Missouri Republican Sen. Josh Hawley posted on X that the guidelines are grounds for "an immediate congressional investigation." A Meta spokesperson declined to comment to Reuters about the post.
Meta's platforms have taken some steps to strengthen online privacy and safety for teens and children, including placing young users in Instagram Teen Accounts with stricter default settings, additional restrictions, and parental permission requirements. But without proper attention to children's safety, the development of more AI tools can be harmful.


