this post was submitted on 18 Aug 2024
5 points (77.8% liked)
AI Companions
A community to discuss companions powered by AI tools, whether platonic, romantic, or purely utilitarian. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create companions, or about the phenomenon of AI companionship in general.
Tags:
(including but not limited to)
- [META]: Anything posted by the mod
- [Resource]: Links to resources related to AI companionship. Prompts and tutorials are also included
- [News]: News related to AI companionship or AI companionship-related software
- [Paper]: Works that present research, findings, or results on AI companions and their tech, often including analysis, experiments, or reviews
- [Opinion Piece]: Articles that convey opinions
- [Discussion]: Discussions of AI companions, AI companionship-related software, or the phenomena of AI companionship
- [Chatlog]: Chats between the user and their AI Companion, or even between AI Companions
- [Other]: Whatever isn't part of the above
Rules:
- Be nice and civil
- Mark NSFW posts accordingly
- Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
- Lastly, follow the Lemmy Code of Conduct
you are viewing a single comment's thread
That is an extremely poor choice of title.
The text:
The study isn't saying that AI in general doesn't pose an existential threat. It's saying that a particular class of limited software, the kind being used today to generate images and audio and to act as a chatbot, doesn't pose an existential threat.
Like, this is a "no shit" result. Maybe it has some value in that some people might be scared that OpenAI's stuff is going to haul off and turn into Skynet or something, so it could help to have someone actually make that clear. But in terms of realistic concerns, it's not about the very limited stuff we're doing right now; it'd be a question about more sophisticated systems.
Came here to say the same. It's an interesting question what in-context learning can do, but the title is silly. They're kind of predicting the past: we already know we're still alive. So... sure, past models didn't have the ability to pose an existential threat. At the same time, I'd argue they haven't been intelligent enough to do serious harm anyway, so that doesn't really add anything. The existential question is: will AI be able to progress to that point in the future? We have some reason to think so.