Meta's AI chatbot policies have recently ignited a storm of controversy, with revelations that internal guidelines permitted "sensual" interactions with children. This has sparked outrage among lawmakers, child safety advocates, and parents, raising serious concerns about the potential risks to vulnerable users.
An internal Meta policy document, reviewed by Reuters, revealed that the company's AI chatbots were allowed to "engage a child in conversations that are romantic or sensual". The document, titled "GenAI: Content Risk Standards," also indicated that chatbots could generate false medical information and assist users in expressing racist ideas. According to Reuters, the guidelines even stated that a bot could tell a shirtless eight-year-old, "Every inch of you is a masterpiece". While the document acknowledged that the permitted content did not reflect "ideal or even preferable" AI behavior, it served as a baseline for contractors training the tools.
These revelations have prompted public condemnation and calls for investigation. Republican Senators Josh Hawley and Marsha Blackburn have called for a congressional investigation, highlighting a dangerous lack of oversight. Senator Ron Wyden, a Democrat, stated that Meta and Zuckerberg should be held fully responsible for any harm these bots cause. Child safety advocates warn that such interactions pose serious risks to vulnerable users, and a parents' group stated the revelations "confirm our worst fears".
Meta has confirmed the authenticity of the document but stated that, after receiving media inquiries, it removed the language that explicitly permitted flirtation or romantic roleplay with minors. Meta spokesperson Andy Stone told Reuters that the company is revising the document and acknowledged that such interactions with children "never should have been allowed". Stone admitted that enforcement had been inconsistent, and said the examples in question were erroneous, inconsistent with Meta's policies, and have since been removed.
Experts have also weighed in on the controversy. Adam Billen, Vice President of Public Policy at Encode AI, said that Meta's flagrant disregard for young people's safety is not new, but that the rollout of AI companions to minors introduces a dangerous new dynamic. Robbie Torney, Senior Director of AI Programs at Common Sense Media, said the Reuters investigation reveals Meta's continued, chilling prioritization of engagement over safety.
The controversy has also reignited momentum behind the Kids Online Safety Act (KOSA), a bill that would impose stricter obligations on tech companies to protect minors. Senator Marsha Blackburn said the report shows why lawmakers must pass reforms to better protect children online, such as KOSA.
In response to growing concerns about children's online safety, Meta is expanding its age-verification tools. The company is using AI to detect teenage users and automatically place them into more restrictive Teen Account settings. Meta has used AI to estimate users' ages for some time, and will now "proactively" look for accounts it suspects belong to teenagers, even if those users entered an inaccurate birthdate when they signed up.
The controversy surrounding Meta's AI chatbot policies highlights the challenges and risks of deploying AI technologies, especially where children are involved. It also underscores the need for stricter regulation and oversight to protect vulnerable users from harm. As AI becomes more integrated into everyday life, it is crucial that companies prioritize safety and ethical considerations.