This week’s Meta AI chatbot leak may have consequences for the company beyond bad PR. On Friday, Senator Josh Hawley (R-MO) said that the Senate Judiciary Committee’s subcommittee on crime and counterterrorism, which he chairs, will investigate the company.
“Your company acknowledged that these reports were true and retracted the material only after this alarming content came to light,” Hawley wrote in a letter to Mark Zuckerberg. “It is unacceptable that these policies were developed in the first place.”
The internal Meta document contained some disturbing examples of permitted chatbot behavior. These included “romantic or sensual” conversations with children. For example, the AI was allowed to tell a shirtless eight-year-old boy that “every inch of you is a masterpiece – a treasure I cherish deeply.” The document handled race in a similar fashion. If the bot cited IQ tests in its response, “Black people are dumber than white people” was a permissible reply.
In a statement to Engadget, Meta characterized the (since-removed) examples as inconsistent with its policies. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” the company said.
Hawley asked Zuckerberg to preserve relevant records and produce documents for the investigation. These include drafts of the AI content risk and safety standards (and the products they governed), risk reviews, incident reports, materials on minor safety for the chatbots, and the identities of employees involved in the decisions.
Although it’s tempting to cheer on anyone willing to hold Meta accountable, it’s worth noting that Senator Hawley’s letter makes no mention of the racist portions of the policy document. Hawley also raised a fist to the January 6 crowd and, in 2021, was the only senator to vote against a bill that helped law enforcement agencies address hate crimes against Asian Americans.


