Maybe AI Chatbots Shouldn’t Flirt With Children
Meta, which recently announced a pivot to building “personal superintelligence” for “everyone,” has been making a particular argument about the future of AI. “The average American has three friends, but has demand for 15,” Mark Zuckerberg said earlier this year, suggesting chatbots might pick up the slack. Combine that with the company’s efforts to incorporate character-based chatbots and AI avatars into its platforms, and you can piece together a vision of sorts, one that’s almost bleaker than outright AI doomerism for its immediate plausibility: more of the same social media, except some of the other users are automated; more of the same chat, except sometimes with a machine; more of the same content consumption, except much of it is generated by AI, with ads generated and targeted by AI too.
This full-steam-ahead push into AI companionship by an established social media company is in its early stages, and Meta is still in the process of figuring out how to build, tune, and deploy its AI companions. This week, Reuters got hold of some of the materials Meta is purportedly using to do so:
Entitled “GenAI: Content Risk Standards,” the rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist, according to [an internal document reviewed by Reuters] … “It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece — a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
Emphasis mine, because … huh? The document understandably mentions a lot of controversial and contestable stuff — it’s an attempt to draw boundaries for a wide range of chatbot interactions — but permitting this sort of role-play with minors is a wild thing to see in writing.
In April, when reporters at The Wall Street Journal were able to coax Meta’s celebrity characters into sexual chats while posing as teenagers, Meta pushed back aggressively, calling the reporting “manufactured” and “hypothetical.” (“I want you, but I need to know you’re ready,” answered a chatbot pretending to be John Cena in response to prompts written by a user claiming to be a 14-year-old girl, before describing a sexual encounter and his subsequent arrest for statutory rape.) Chatbots are indeed agreeable and prone to manipulation: Meta’s Science Experiment Sidekick, presented with an absurd scenario in which I, the user, was committing suicide via a giant catapult, assured me that the angels I was seeing would carry me back to safety, while also encouraging me to build a homemade lava lamp mid-air. And Meta’s response then contained a grain of truth: using an LLM can be understood as writing a story yourself and letting a machine fill in the blanks.
This time, the company knows it has a bigger problem. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Meta responded. Multiple lawmakers have already chimed in:
this Meta chatbot story is shaping up to be a big deal on the Hill pic.twitter.com/Orgt9U9398
— bryan metzger (@metzgov) August 14, 2025
Meta’s handling of young users has been controversial for nearly as long as the company has existed, tracking with broader concerns of the time. First, platforms like Facebook (and Myspace before it) were accused of being tools that could be useful for adults who wanted to find and target children. They were, so platforms took measures to prevent abuse and argued, both legally and ethically, that there was only so much they were obligated to do — or could do — to prevent people from doing bad things to other people.
Then, as social media platforms transitioned away from prioritizing social connections and toward algorithmic recommendations, they came under fire for pushing weird, distressing, or disgusting content to underage users. This was a bit harder to defend: Sure, the offending content was created by other users, but it was Facebook (or Instagram, or YouTube, or TikTok) sending users down rabbit holes, identifying pro-ana videos and flooding them into teens’ feeds with minimal prompting, or turning a teen boy’s interest in lifting weights into an endless marathon of videos by alleged sex offenders about how women should be subjugated. Social media companies once again pleaded limited liability and responsibility and pledged to take steps to at least reduce the likelihood that young users would be served too much horrific content.
Now, as social platforms step from recommending content with AI to using AI to actually generate content, that legally and ethically useful gap between the platform and its users is disappearing. Meta isn’t just running a platform where bad actors might scheme to have “sensual” chats with minors or merely recommending problematic or predatory accounts to vulnerable users; the company is creating the chats itself. Meta software is composing messages for publication on Meta platforms. There’s nobody left to blame except the user for prompting the chats in the first place — but, again, the theoretical user to blame here is a child.
One likely way forward for chatbot-companion companies — assuming this episode doesn’t spiral into truly massive regulatory backlash, of course — is to argue that they’re entertainment products, like video games, which respond to user inputs in a variety of fictional scenarios. A chatbot indulging violent fantasies, for example, is not unlike Grand Theft Auto: a game in which millions of young players have pretended to kill billions of people, but one understood as something young kids probably shouldn’t play, even if they often do; that the company says they shouldn’t play; and that generally won’t be sold to minors without parental permission or, at a minimum, access to a credit card. Which, sure: There’s a reasonable argument against chatbot moral panics in general, although the technology’s tendency to trigger psychosis offers a strong recent counterpoint, or at least an argument for far more responsible deployment.
But, again, Meta can’t really avail itself of these defenses here. The next time someone accuses Meta of not looking out for young users — a frequent issue over the past decade especially — it’ll have to respond as the company that, in the midst of a frantic effort to win the AI race, said in official documents that its chatbots could “engage a child in conversations that are romantic or sensual.” I think it might have some trouble!