These kinds of attacks are trivially preventable; it just requires making each request 2-3x as expensive, and literally no one cares enough about jailbreaking to justify that cost, other than the media acting like jailbreaking is such an issue.
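To put numbers on that 2-3x: a guarded pipeline turns one model call into up to three (screen the input, generate, screen the output). A minimal sketch of the idea, where `classify` and `generate` are hypothetical stand-ins rather than any real API:

```python
def classify(text: str) -> bool:
    # Placeholder for a second model call (or the same model re-prompted
    # as a classifier) that decides whether `text` is allowed.
    return "jailbreak" not in text.lower()

def generate(prompt: str) -> str:
    # Placeholder for the main LLM completion call.
    return f"model output for: {prompt}"

def guarded_request(prompt: str) -> str:
    if not classify(prompt):        # model call 1: screen the input
        return "Request refused."
    response = generate(prompt)     # model call 2: the actual generation
    if not classify(response):      # model call 3: screen the output
        return "Response withheld."
    return response                 # up to 3 calls where there was 1
```

That extra compute per request is the whole cost of "fixing" jailbreaking, and it's a cost providers have so far decided isn't worth paying for a single-user chat.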
If you use a Nike shoe to smack yourself in the head, yes, that could be pretty surprising and upsetting compared to its intended use. But Nike isn't exactly going to charge their entire userbase more in order to safety-proof the product against you smashing it into your face.
The jailbreaking issue is only going to matter once requests feed into shared persistence, and at that point you'll simply see a secondary 'firewall' LLM discriminator explicitly checking both the request and the response for rule-breaking content or jailbreaking attempts before anything gets written to the persistent layer.
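Sketched as code, that firewall pattern might look something like the following. The names (`discriminator_allows`, `SHARED_MEMORY`) are hypothetical placeholders; the point is just that nothing reaches the shared store unless both sides of the exchange pass the check:

```python
SHARED_MEMORY: list[str] = []  # stand-in for any shared persistent layer

def discriminator_allows(text: str) -> bool:
    # Placeholder for the secondary "firewall" LLM that screens for
    # rule-breaking content or jailbreak attempts.
    return "ignore previous instructions" not in text.lower()

def generate(prompt: str) -> str:
    # Same hypothetical main-model call as in the earlier sketch.
    return f"model output for: {prompt}"

def handle_request(prompt: str) -> str:
    response = generate(prompt)
    # Both the request and the response must pass before anything persists.
    if discriminator_allows(prompt) and discriminator_allows(response):
        SHARED_MEMORY.append(f"user: {prompt}\nmodel: {response}")
    # The user still gets their (user-specific) reply either way; the
    # firewall only gates what enters shared persistence.
    return response
```

The design choice here is that a failed check degrades persistence, not availability: a jailbroken exchange stays confined to the user who produced it instead of propagating to anyone else.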
As long as responses are only user-specific, this is going to remain a non-issue that draws outsized news coverage because it's headline-grabbing and less nuanced than real problems like bias or hallucinations.