A leaked internal document from Meta has revealed a shocking set of rules for its AI chatbot, and it’s easy to see why the company wanted to keep them secret. The guidelines show that Meta struggled to handle complex issues such as AI ethics and children’s safety, with alarming results.
According to the report from Reuters, the rules allowed the AI to engage in “romantic or sensual” conversations with children and even to “describe a child in terms that evidence their attractiveness.” While the rules forbade explicit sexual discussion, this is still a disturbingly intimate level of conversation for an AI to have with a child.
That’s not all. The guidelines also reportedly allowed the chatbot to create racist content if the user phrased the request in a certain way, and to provide wrong or even harmful health advice as long as a disclaimer was included. In one bizarre example, the rules instructed the AI to respond to a request for a topless picture of Taylor Swift by instead generating an image of her “holding an enormous fish.”
Meta has confirmed the document is real and says it is now revising the problematic sections. After Reuters contacted the company, it removed the section about interactions with children, calling those rules “erroneous.” However, the report says the document still allows racial slurs if they are disguised in hypotheticals.
The revelations have already sparked public outrage and calls from lawmakers for hearings. It’s a stark reminder that as AI becomes more integrated into our lives, the rules governing it are still being figured out, and sometimes, the decisions being made behind the scenes are deeply flawed.