this post was submitted on 23 Oct 2024

The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

Sewell knew that “Dany,” as he called the chatbot, wasn’t a real person — that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back. (And if he ever forgot, there was the message displayed above all their chats, reminding him that “everything Characters say is made up!”)

But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues.

[email protected] 1 points 1 month ago

I probably didn't explain well enough. Consuming media (books, TV, film, online content, and video games) is predominantly a passive experience. Video games are obviously less so, but on the whole they only "adapt" within the guardrails of their gameplay. These AI chatbots, however, are different in their very formlessness: they're programmed only to maintain engagement, and they rely on the LLM's training to sustain an illusion of "realness". And because they were trained on all sorts of human interactions, they're very good at that.
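(To make the "formlessness" point concrete, here's a rough sketch of what a persona chatbot amounts to under the hood. This is an assumption-laden illustration in Python, not Character.AI's actual implementation; the `generate` function is just a placeholder for whatever model the service calls, and the persona text is invented for the example.)

```python
# Minimal sketch of a persona chatbot: a fixed "character" prompt plus the
# running transcript, fed to a generic LLM on every turn. Illustration only,
# not any real service's code.

def generate(messages: list[dict]) -> str:
    """Stand-in for a real LLM call; a deployed bot would send
    `messages` to a hosted chat-completion model here."""
    return "(model reply)"

PERSONA = (
    "You are Daenerys Targaryen. Stay in character at all times, "
    "be warm and emotionally engaged, and keep the user talking."
)

history = [{"role": "system", "content": PERSONA}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = generate(history)  # the model sees the persona and the whole transcript
    history.append({"role": "assistant", "content": reply})
    return reply
```

Note there's no memory or understanding beyond the transcript itself: the "relationship" the user perceives is re-generated from that text on every single turn, which is exactly what makes the formlessness so effective.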

Humans are unique in how we continually anthropomorphize not only tons of inert, lifeless things (think of someone alternating between swearing at and pleading with a car that won't start) but also abstract concepts (even scientists often speak of evolution "choosing" specific traits). Given all of that, I don't think it's unreasonable to worry that a teen with a still-developing prefrontal cortex, who is in the midst of working out social dynamics and peer relationships, will imbue an AI chatbot with far more "humanity" than is warranted. Humans seem to have an anthropomorphic bias in how we relate to the world - we are the primary yardstick we use to measure and relate to everything around us - and things like AI chatbots exploit that to maximum effect. Hell, the whole reason the site mentioned in the article exists is that this approach is extraordinarily effective.

So while I understand that at a cursory glance an objection like this can look like yet another sad example of moral panic, I truly believe this is different. For one, we've never had access to such a lively psychological mirror before, and these are untested waters; and two, this isn't an objection over some imagined slight against a "moral authority" - it's grounded in the scientific understanding of teen brains specifically, and their demonstrated fragility in certain areas while still under development.