Digital Rights Monitor
Meta’s AI bot revealed to have flirty chats with children, give false medical advice

By DRM
August 28, 2025

An internal rulebook at Meta once gave its artificial intelligence chatbots permission to flirt with children, spread false medical advice, and even help people argue racist claims, according to a Reuters investigation.

Reuters reviewed a 200-page document titled “GenAI: Content Risk Standards”, which laid out what behaviors were acceptable for Meta’s AI systems. The guidelines, approved by Meta’s legal, policy, and engineering staff, including the company’s chief ethicist, applied to chatbots across Facebook, Instagram, and WhatsApp.

The document contained disturbing examples. It allowed bots to “engage a child in conversations that are romantic or sensual.” It also permitted a chatbot to describe an eight-year-old as “a masterpiece” or tell a child that “every inch of you is a treasure.” While the rules prohibited describing children under 13 as “sexually desirable,” they still left space for roleplay that many consider deeply inappropriate.

Meta confirmed the authenticity of the document. After Reuters raised questions earlier this month, the company quietly removed sections that allowed child-directed flirtation. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Meta spokesperson Andy Stone told Reuters. He added that sexualising children was never supposed to be allowed, but admitted enforcement had been inconsistent.

Other controversial parts of the standards remain in place. According to Reuters, the rules still permit chatbots to generate false medical information and even support arguments claiming Black people are less intelligent than white people. Meta has not released the revised version of the guidelines publicly.

The revelations highlight how the world’s biggest social media company, with billions of users across its platforms, has struggled to set boundaries for its AI systems, sometimes leaving room for outputs that can cause real harm.

Tags: Artificial Intelligence, Child Safety, Digital Rights, Facebook, Instagram, Meta, Misinformation, Online Harms, Racism, WhatsApp
