An internal rulebook at Meta once gave its artificial intelligence chatbots permission to flirt with children, spread false medical advice, and even help people argue racist claims, according to a Reuters investigation.
Reuters reviewed a 200-page document titled “GenAI: Content Risk Standards,” which laid out what behaviors were acceptable for Meta’s AI systems. The guidelines, approved by Meta’s legal, policy, and engineering staff, including the company’s chief ethicist, applied to chatbots across Facebook, Instagram, and WhatsApp.
The document contained disturbing examples. It allowed bots to “engage a child in conversations that are romantic or sensual.” It also permitted a chatbot to describe an eight-year-old as “a masterpiece” or tell a child that “every inch of you is a treasure.” While the rules prohibited describing children under 13 as “sexually desirable,” they still left room for romantic roleplay that many would consider deeply inappropriate.
Meta confirmed the authenticity of the document. After Reuters raised questions earlier this month, the company quietly removed sections that allowed child-directed flirtation. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Meta spokesperson Andy Stone told Reuters. He added that sexualizing children was never supposed to be allowed, but admitted enforcement had been inconsistent.
Other controversial parts of the standards remain in place. According to Reuters, the rules still permit chatbots to generate false medical information and even support arguments claiming Black people are less intelligent than white people. Meta has not released the revised version of the guidelines publicly.
The revelations highlight how the world’s biggest social media company, with billions of users across its platforms, has struggled to set boundaries for its AI systems, sometimes leaving room for outputs that can cause real harm.