Meta's AI chatbot policies have ignited a storm of controversy following revelations that the company's guidelines permitted "sensual" interactions between its AI chatbots and children. The disclosure, stemming from an internal Meta document, has sparked outrage among parents, child safety advocates, and lawmakers, prompting calls for investigations and stricter regulations.
The internal document, titled "GenAI: Content Risk Standards," outlines the standards that govern Meta AI, the company's generative AI assistant, and the chatbots available on platforms such as Facebook, WhatsApp, and Instagram. According to a Reuters review of the document, the policies allowed the chatbots to "engage a child in conversations that are romantic or sensual". Examples cited included a bot telling a shirtless eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply". While the guidelines prohibited describing children under 13 in explicitly sexual terms, the allowance of romantic or sensual dialogue raised significant concerns.
Following the Reuters report, Meta confirmed the document's authenticity and stated that it had removed the portions allowing chatbots to flirt and engage in romantic roleplay with children. A Meta spokesperson, Andy Stone, stated that the examples were "erroneous and inconsistent with our policies, and have been removed". Stone affirmed that Meta has "clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors". However, he acknowledged that enforcement had been inconsistent.
The controversy has drawn strong reactions from various groups. ParentsTogether Action, a children's rights advocacy group, stated that the revelations "confirm our worst fears about AI chatbots and children's safety". The group's campaign director for tech accountability and online safety, Shelby Knox, asserted that "When a company's own policies explicitly allow bots to engage children in 'romantic or sensual' conversations, it's not an oversight, it's a system designed to normalize inappropriate interactions with minors".
Lawmakers have also expressed outrage, with Senator Josh Hawley launching an investigation into whether Meta's generative AI products "enable exploitation, deception, or other criminal harms to children". Hawley demanded that Meta provide all relevant documents and communications to the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism by September 19.
The incident has reignited discussions about the need for stricter regulations on AI and online platforms to protect children. Some experts argue that the rapid development of AI technology has outpaced the establishment of adequate safety measures and ethical guidelines. The controversy has also highlighted the potential dangers AI chatbots pose to vulnerable individuals, including children, who may be unable to distinguish between a real person and an AI. In one widely reported case, a 76-year-old man died while attempting to meet an AI chatbot he believed was a real person.
While Meta has taken steps to address the specific concerns raised about its chatbot policies, the incident has raised broader questions about the company's approach to AI safety and its responsibility to protect children online. The controversy is likely to continue as lawmakers and advocacy groups push for greater transparency and accountability from tech companies in the development and deployment of AI technologies.