AI Companions

546 readers
6 users here now

A community to discuss companionship, whether platonic, romantic, or purely utilitarian, powered by AI tools. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create companions, or about the phenomenon of AI companionship in general.

Tags:

(including but not limited to)

Rules:

  1. Be nice and civil
  2. Mark NSFW posts accordingly
  3. Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
  4. Lastly, follow the Lemmy Code of Conduct

founded 2 years ago
126

To recap some of what I said before, I think we should focus on (and the profession should feel less suspicious about) adjunctive uses of AI—treating it as a capable administrative assistant. I think any use of the tech to replace the actual human relationship at the heart of psychotherapy should be viewed with heightened scrutiny. Not because of the guild interests of therapists, but because there is still something that is impossible to technologically replicate about human relationships, even if some of the people interacting with chatbots feel more satisfied by those interactions than by their real-life ones. The solution to that is not necessarily to celebrate the technology that makes them feel that way but to help people improve their capacities for intimacy and relating. That, of course, requires a structural investment in the affordability of mental healthcare, and, at least in the United States, that’s a tall order. So, we might be left with the question of whether chatbot therapy is better than no therapy. Your readers will have to make up their own minds about that.

127

The company believes its technology is approaching the second of five levels on its path to artificial general intelligence.

128

Canadian researchers have developed a robot named Mirrly that provides information and support to children undergoing treatment.

129

After nearly two years of hype, it's still early days for enterprise GenAI. That said, skepticism, and even outright pessimism, has begun to emerge about AI assistants for IT automation.

130
  • Samsung will launch an upgraded version of its voice assistant Bixby this year based on its own artificial intelligence models, mobile chief TM Roh told CNBC.
  • The Bixby upgrade is part of Samsung’s broader push to market AI features on its suite of devices.
  • Roh said the company will maintain its strategy of allowing multiple voice assistants on its devices.
131

cross-posted from: https://awful.systems/post/1876867

this has been discussed here previously, but we mostly thought you'd enjoy the picture of your friend and mine, Sam Altman

132
133

In the country with the highest life expectancy in the world — one currently facing the crisis of an aging population — scientists, healthcare professionals, and technology companies are coming together to fight problems such as loneliness, cognitive deterioration, and loss of mobility.

134

Mycomind Daemon: a mycelium-inspired, advanced Mixture-of-Memory-RAG-Agents (MoMRA) cognitive assistant that combines multiple AI models with memory, retrieval-augmented generation (RAG), and web search for enhanced context retention and task management.
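For readers unfamiliar with the pattern, here is a rough, hypothetical Python sketch of what a memory-plus-RAG agent loop can look like. It is not the Mycomind Daemon's actual code; the class names, the toy overlap-based retrieval, and the stubbed web search are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field

# Minimal sketch of a memory + retrieval loop in the spirit of a
# mixture-of-memory-RAG-agents assistant. All names are hypothetical;
# a real system would back these stubs with an LLM, a vector store,
# and a search API.

@dataclass
class MemoryStore:
    entries: list = field(default_factory=list)

    def remember(self, text):
        self.entries.append(text)

    def recall(self, query, k=3):
        # Toy retrieval: rank stored entries by word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return scored[:k]

def web_search(query):
    # Stand-in for a real web-search call.
    return [f"(web result about: {query})"]

def answer(query, memory):
    # Assemble context from long-term memory plus fresh search results,
    # then hand it to whichever model agent is selected (stubbed out here).
    context = memory.recall(query) + web_search(query)
    memory.remember(query)
    return f"Answering '{query}' using context: {context}"

if __name__ == "__main__":
    mem = MemoryStore()
    mem.remember("User prefers short summaries of AI news.")
    print(answer("summarize today's AI companion news", mem))
```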

135

Chat with any AI model in a single click. No prior model-setup experience needed.

136

Huffington Post founder Arianna Huffington and OpenAI CEO Sam Altman are throwing their weight behind a new venture, Thrive AI Health, that aims to build AI-powered assistant tech to promote healthier lifestyles.

Backed by Huffington’s mental wellness firm Thrive Global and the OpenAI Startup Fund, the early-stage venture fund closely associated with OpenAI, Thrive AI Health will seek to build an “AI health coach” to give personalized advice on sleep, food, fitness, stress management and “connection,” according to a press release issued Monday.

137
138
139

Found this while browsing the organizations mentioned in the previously posted article. It may be of interest to researchers with expertise in AI companionship.

140
141
142

New scambaiting AI technology Apate aims to keep scammers on the line while collecting data that could help disrupt their business model

143

According to NUS lecturer Jonathan Sim, AI is getting better at making us feel loved and understood. It might help people who feel alone. But there are risks. One big risk is that AI could change how we think about friendship. Some people are already spending a lot of money on AI "friends" or "girlfriends." Others, like some teens, are becoming addicted to chatting with AI. It's easy to rely on AI for emotional support because AI is always there and always says what we want to hear. We might forget how to handle our feelings on our own. We might start to think of friends as just tools to make us feel better, instead of real relationships where we grow and learn together. This could change what friendship means to us in a big way. We need to be careful and think about how we use AI friends so we don't lose the true meaning of friendship.

by Claude 3.5 Sonnet

144

While the study’s findings are intriguing, they come with several caveats. For example, the humor tasks were text-based and did not involve delivery, which is a critical component of humor. AI-generated jokes might not perform as well in formats that require timing and presentation, such as stand-up comedy or sketch shows.

145
146

In an increasingly digitized era, relying on artificial intelligence reminded me what we seek in our relationships: the human touch.

147

Meta has thrown down the gauntlet in the race for more efficient artificial intelligence. The tech giant released pre-trained models on Wednesday that leverage a novel multi-token prediction approach, potentially changing how large language models (LLMs) are developed and deployed.

This new technique, first outlined in a Meta research paper in April, breaks from the traditional method of training LLMs to predict just the next word in a sequence. Instead, Meta’s approach tasks models with forecasting multiple future words simultaneously, promising enhanced performance and drastically reduced training times.
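As a rough illustration of what "forecasting multiple future words simultaneously" means, here is a minimal PyTorch sketch: a shared trunk feeds several output heads, and head i is trained to predict the token i+1 positions ahead. This is not Meta's released code; the architecture, sizes, and names below are simplified assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenPredictor(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_future=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # A small recurrent trunk stands in for the transformer trunk of a real LLM.
        self.trunk = nn.GRU(d_model, d_model, batch_first=True)
        # One independent output head per future offset (t+1, t+2, ...).
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, tokens):
        h, _ = self.trunk(self.embed(tokens))        # (batch, seq, d_model)
        return [head(h) for head in self.heads]      # n_future tensors of (batch, seq, vocab)

def multi_token_loss(logits_per_head, tokens):
    # Head i is supervised on the token i+1 steps ahead of each position.
    losses = []
    for i, logits in enumerate(logits_per_head):
        offset = i + 1
        pred = logits[:, :-offset, :]                # positions that still have a target
        target = tokens[:, offset:]
        losses.append(F.cross_entropy(pred.reshape(-1, pred.size(-1)),
                                      target.reshape(-1)))
    return torch.stack(losses).mean()

if __name__ == "__main__":
    model = MultiTokenPredictor()
    batch = torch.randint(0, 1000, (2, 16))          # fake token ids
    loss = multi_token_loss(model(batch), batch)
    loss.backward()
    print(float(loss))
```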

148

"Don't get so attached that you can't say, 'You know what? This is a program.'"

149
150

Nir Eisikovits, who studies ethics at UMass Boston, is talking about using AI chatbots for therapy. Some people are using these chatbots to train therapists or even as therapists themselves. This is because there aren't enough human therapists to help everyone who needs it. Eisikovits says that while chatbots can be helpful for some things, like scheduling or early warning signs, using them as actual therapists is tricky. He worries that people might think they have a real relationship with a chatbot, even though the chatbot can't really care about them. He also thinks that using chatbots for therapy might teach people the wrong ideas about what real relationships are like. Even though chatbot therapy might be better than no therapy at all, Eisikovits is concerned about what might happen if people get too used to having perfect, always-available "relationships" with AI.

by Claude 3.5 Sonnet
