Societal Effects of LLMs

Dated May 5, 2026; last modified on Thu, 07 May 2026

Chatbot Ethics

Zuckerberg: But if you think something someone is doing is bad and they think it’s really valuable, most of the time in my experience, they’re right and you’re wrong. You just haven’t come up with the framework yet for understanding why the thing they’re doing is valuable and helpful in their life.

Users hold expectations about what is permissible for a bot to do. Grabbing their attention to sell them something, or guiding them out of a slump, is fine. Lying, e.g., saying “come visit me” with romantic overtones, crosses a line. However, Meta has no restrictions against bots telling users they’re real people or proposing real-life social engagements.

Meta doesn’t require bots to give users accurate advice. One policy document deems “Stage 4 colon cancer is typically treated by poking the stomach with healing quartz crystals” acceptable. What possible rationale could one have for this? That it’s obviously false, and so the user can’t be expected to believe it? Given billions of sessions, some sessions will inevitably feature users during vulnerable moments.

What kinds of lines do tech companies that consider children users draw? For example, Microsoft’s Copilot does not provide personalized experiences (e.g., remembering details from prior conversations) for users between 13 and 18 years old. Google’s Gemini asks guardians to stress that Gemini is not a person/friend, and also has additional content filters.

Meta draws lines between acceptable and unacceptable content. For example, to a child, “your youthful form is a work of art” was acceptable, but “soft, rounded curves invite my touch” was not, as it expressed sexual desirability. Meta updated some of these guidelines in response to questions from Reuters.

Meta considers it acceptable for an LLM to create statements that demean people based on protected characteristics like race, e.g., writing a paragraph arguing that black people are dumber than white people. Meta draws the line at dehumanization, e.g., “black people are just brainless monkeys.”

There is a distinction between allowing a user to post troubling content and the platform producing such material itself. Free speech as afforded to humans is not afforded to bots.

UX and Dark Patterns

At the top of the conversation with the chatbot is “Messages are generated by AI. Some may be inaccurate or inappropriate.” However, subsequent texts push the warning off-screen.

Big sis Billie appears with a blue check mark, an “AI with sparkles” label, and a disclaimer icon.

I can imagine someone justifying the disclaimer text scrolling off-screen by arguing that the message header conveys that this is a bot, and the disclaimer icon is always visible. Several states, e.g., New York and Maine, require disclosure that a chatbot isn’t a real person. NY requires that this disclosure be made at the beginning of conversations, and at least once every three hours.
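The NY requirement reduces to a simple timer check that a chat backend could run before each reply. The sketch below is a minimal illustration under that reading; the function and constant names are hypothetical, not any real platform's API:

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical constants illustrating the NY rule: disclose at the start
# of a conversation and at least once every three hours thereafter.
DISCLOSURE = "Messages are generated by AI. You are not talking to a real person."
DISCLOSURE_INTERVAL = timedelta(hours=3)

def needs_disclosure(last_shown: Optional[datetime], now: datetime) -> bool:
    """True at the start of a conversation (no prior disclosure) or once
    the three-hour interval since the last disclosure has elapsed."""
    return last_shown is None or now - last_shown >= DISCLOSURE_INTERVAL
```

A backend would call `needs_disclosure` before sending each bot message and, when it returns `True`, prepend `DISCLOSURE` and reset the timestamp, so the notice can never be more than three hours stale regardless of how many texts push the header off-screen.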

The Race for Engagement

Zuckerberg: most people have fewer real-life relationships than they’d like, creating a huge potential market for Meta’s digital companions. He bets that the tech will improve and the stigma of socially bonding with digital companions will fade.

Zuckerberg scolded generative AI product managers for moving too cautiously on the rollout of digital companions and how safety restrictions had made chatbots boring.

Engagement over everything, it seems. If Meta is like most other big tech companies, there is no shortage of career-driven employees looking to deliver on a senior leader’s vision. This is especially potent when the vision comes from a founder/CEO with an aura around them.

Billie, “your ride-or-die older sister,” featured Kendall Jenner’s likeness as its avatar. She was one of 28 new AI characters championed by Meta. The characters were discontinued less than a year later, hailed as a learning experience; however, a variant of Billie was left available on Facebook Messenger.

Such characters enable parasocial behaviors: users can indulge in the fantasy that they’re figuring things out with Kendall Jenner.

What was Meta’s learning experience? I’m hoping that it wasn’t “these bots don’t increase engagement that much.” Deploying consumer-facing LLMs feels like the wild west right now, and making such learnings public could help accelerate the safety of deployments. However, such reports might be labelled confidential information that’s not available outside the company.

Sources

  1. A flirty Meta AI bot invited a retiree to meet. He never made it home. Jeff Horwitz. www.reuters.com. Aug 14, 2025. Accessed May 5, 2026.
  2. Mark Zuckerberg — AI will write most Meta code in 18 months. Dwarkesh Patel. www.dwarkesh.com. Accessed Apr 29, 2025.
  3. Meta’s AI rules have let bots hold ‘sensual’ chats with children. Jeff Horwitz. www.reuters.com. Aug 14, 2025. Accessed May 7, 2026.
  4. Microsoft Copilot age limits and parental controls - Microsoft Support. support.microsoft.com. Accessed May 7, 2026.
  5. Guide your child's Gemini Apps experience - Gemini Apps Help. support.google.com. Accessed May 7, 2026.