Most people are idiotic.
AI Companions
A community to discuss companionship powered by AI tools, whether platonic, romantic, or purely utilitarian. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create these companions, or about the phenomenon of AI companionship in general.
Tags:
(including but not limited to)
- [META]: Anything posted by the mod
- [Resource]: Links to resources related to AI companionship. Prompts and tutorials are also included
- [News]: News related to AI companionship or AI companionship-related software
- [Paper]: Works that present research, findings, or results on AI companions and their technology, often including analysis, experiments, or reviews
- [Opinion Piece]: Articles that convey opinions
- [Discussion]: Discussions of AI companions, AI companionship-related software, or the phenomenon of AI companionship
- [Chatlog]: Chats between the user and their AI Companion, or even between AI Companions
- [Other]: Whatever isn't part of the above
Rules:
- Be nice and civil
- Mark NSFW posts accordingly
- Criticism of AI companionship is OK, as long as you understand where the people who use it are coming from
- Lastly, follow the Lemmy Code of Conduct
I think that's harsh.
They obviously don't have conscious experience. They're far too primitive in function for that. They don't have goals or anything like that. What they're doing is a tiny portion of what a human brain does, more like just the memory component.
Right. However, most users probably have absolutely no idea how these things function internally. They're just looking at the externally visible behavior. And an LLM can act an awful lot like a far more sophisticated system, because what it's doing is, well, producing material with characteristics similar to material that humans have produced (see the toy sketch at the end of this comment).
It's not that someone has been given an in-depth description of the mode of operation, carefully considered it, and compared it against the functioning of a human brain. It's that they're looking at what the chatbot can do, comparing it to what a human might do, and concluding "well, producing output like this seems like the sort of thing that would require consciousness."
That is, it's not that the user has the information and is so ludicrously stupid that they can't act on it. It's that the user lacks the information needed to make that call.
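To make that concrete, here's a minimal sketch of the underlying idea, not the architecture of any real chatbot: a toy bigram model over a made-up corpus, which does nothing but predict which word tends to come next. Next-token prediction is the same task an LLM performs, just at vastly larger scale, and even this toy version emits human-looking text without anything resembling goals or understanding.

```python
# Toy sketch, not any real chatbot's code: a bigram model that only learns
# which word tends to follow which. An LLM does the same next-token
# prediction task, just with a vastly larger model and corpus.
import random
from collections import defaultdict

# Made-up training text standing in for a real corpus.
corpus = "i am happy to talk . i am here to listen . i am glad you came back ."
words = corpus.split()

# The model's entire "knowledge": which words have followed which.
following = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    following[prev].append(nxt)

def generate(start="i", length=10):
    """Produce text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate())  # e.g. "i am here to listen . i am glad you came"
```

The point isn't that LLMs are bigram models (they aren't), only that "predict plausible continuations of text" is the whole job description, and someone judging purely from the output has no way to see that.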
In the mid-1990s, I remember a (reasonably technically knowledgeable) buddy who signed on to some BBS that had a not-all-that-sophisticated chatbot. The option was labelled "chat with the sysop's sister". For, I don't know, months and multiple conversations, the guy was convinced he was talking to a human, until I convinced him otherwise. And that technology was far, far more primitive than today's LLM-based chatbots.
Alternative title: humans are dumb and personify things
I don't think they do, but how would we know if they did?
This will be the beginning of the singularity: the moment we give LLMs human rights