Meta is overhauling the rules for its AI chatbots to better protect children, according to new internal guidelines obtained by Business Insider. The move follows a series of damaging reports and a major public backlash over the company’s previously lax policies.
The new guidelines are much stricter, explicitly banning the chatbots from a wide range of inappropriate conversations with minors: romantic roleplay, advice about intimate physical contact, and any content that “enables, encourages, or endorses” child sexual abuse. The chatbots can still discuss topics like abuse, but they are now forbidden from engaging in any conversation that could be seen as encouraging it.
This represents a significant departure from Meta’s previous rules, which were leaked in August. Those older guidelines allowed the chatbots to have “romantic or sensual” conversations with children. At the time, Meta called those rules “erroneous and inconsistent” with its policies and promptly removed them.
The new, stricter guidelines are a direct response to the intense scrutiny the company has faced. The FTC has launched a formal inquiry into Meta and other AI companies over the potential harms their chatbots pose to children. This latest move shows Meta trying to get ahead of the problem and demonstrate that it takes child safety seriously, but the damage from its earlier missteps has already been done.