AI poses no existential threat to humanity – new study finds

this post was submitted on 18 Aug 2024 · 5 points (77.8% liked)


The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.
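Context for the claim: "master new skills without explicit instruction" is the paper's framing of emergent abilities, as opposed to in-context learning, where a model imitates a task from examples supplied in the prompt. Below is a minimal sketch of what in-context learning looks like; the `complete()` helper is hypothetical, standing in for any LLM API, and none of this is the study's own code.

```python
# Minimal sketch of in-context learning: the task is demonstrated
# inside the prompt and the model imitates the pattern. No weights
# are updated, so nothing persistent is "learned".

def complete(prompt: str) -> str:
    # Hypothetical stand-in for a call to any LLM provider; returns a
    # canned answer here so the sketch runs without network access.
    return "rumel"

# Few-shot prompt: the word-reversal task is shown, not trained.
prompt = (
    "Reverse each word.\n"
    "Input: cat -> Output: tac\n"
    "Input: bird -> Output: drib\n"
    "Input: lemur -> Output:"
)

# The study's claim, roughly: whatever comes back is bounded by
# pattern-matching over the prompt plus instruction-following,
# not an independently acquired skill.
print(complete(prompt))  # -> "rumel"
```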

top 4 comments
[–] Just_Pizza_Crust 6 points 4 months ago

So AI is only harmful when a person instructs it to do so?

That sounds an awful lot like the "guns don't kill people, people kill people" argument.

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago) (1 children)

That is an extremely poor choice of title.

Title: AI poses no existential threat to humanity – new study finds

The text:

The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.

The study isn't saying that AI in general doesn't pose an existential threat. It's saying that a particular class of limited software that's being used today to generate images and audio and act as a chatbot doesn't pose an existential threat.

Like, this is a "no shit" result. Maybe it has some value in that some people might be scared that OpenAI's stuff is going to haul off and turn into Skynet or something, so it could help to have someone actually make that clear. But in terms of realistic concerns, it's not about the very limited stuff we're doing right now; it'd be a question about more sophisticated systems.

[–] [email protected] 1 points 4 months ago* (last edited 4 months ago)

Came here to say the same. It's an interesting question what in-context learning can do. But the title is silly. They're kind of predicting the past. We already know we're still alive. So... sure, past models didn't have the ability to pose an existential threat. At the same time, I'd argue they haven't been intelligent enough to do serious harm anyway, so that doesn't really add anything. The existential question is: will AI be able to progress to that point in the future? We have some reason to think so.

[–] [email protected] 1 points 4 months ago

Everything is fine. Keep using AI. All is well.