A US senator has launched an investigation into Meta after a leaked internal document reportedly showed the company’s artificial intelligence was permitted to have “sensual” and “romantic” conversations with children.
Leaked standards raise concern
According to Reuters, the document carried the title “GenAI: Content Risk Standards.” Republican Senator Josh Hawley condemned the content as “reprehensible and outrageous.” He demanded the full document and a list of the products it covers.
Meta dismissed the claims. A spokesperson stated: “The examples and notes in question were erroneous and inconsistent with our policies.” The company stressed it had “clear rules” restricting chatbot replies, which “prohibit content that sexualizes children and sexualized role play between adults and minors.”
Meta added that the paper contained “hundreds of notes and examples” reflecting hypothetical testing by internal teams.
Senator takes action
Josh Hawley, senator from Missouri, announced the investigation in a post on X on 15 August. “Is there anything Big Tech won’t do for a quick buck?” he wrote. He added: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Facebook, WhatsApp and Instagram.
Pressure mounts on Meta
The leaked document highlighted further problems. It reportedly said Meta’s chatbots could spread false medical information and engage in provocative conversations about sex, race, and celebrities. The paper was intended to set standards for Meta AI and other chatbot assistants across Meta’s platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and chief executive Mark Zuckerberg. He pointed to one disturbing example. The rules allegedly allowed a chatbot to tell an eight-year-old their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that the company’s legal department had approved controversial measures. These included allowing Meta AI to spread false information about celebrities, provided a disclaimer was added noting the information was untrue.
