AI Companions

547 readers
6 users here now

A community to discuss companionship powered by AI tools, whether platonic, romantic, or purely utilitarian. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create companions, or about the phenomenon of AI companionship in general.


Rules:

  1. Be nice and civil
  2. Mark NSFW posts accordingly
  3. Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
  4. Lastly, follow the Lemmy Code of Conduct

founded 2 years ago
201

Researchers tested the effectiveness of AI-generated apologies against human-written ones in a social experiment, where participants were insulted by a computer and then presented with various apologies. Surprisingly, the AI-generated apology from ChatGPT outperformed most human-written apologies, with none of the participants seeking revenge against it. While experts agree that AI can master the fundamentals of a good apology, including expressing regret and taking responsibility, they also note that AI models lack emotional intelligence and may struggle with more complex social situations. The study raises questions about whether AI can replace human authenticity in apologies, but suggests that AI can be a useful tool to help individuals craft more effective apologies. This has implications for the development of AI companions that can assist humans in navigating complex social situations, making it easier for people to communicate effectively and build stronger relationships.

by Llama 3 70B

202
203
204

People often lie to their therapists, with one study finding that 93% of respondents had lied to their therapist. Researchers have identified various reasons for this dishonesty, including fear of judgment, embarrassment, and attachment style. In contrast, some studies suggest that people may be more truthful when interacting with generative AI systems for mental health advice, possibly due to anonymity and the lack of perceived judgment. However, it's unclear whether this is consistently the case, and more research is needed to understand the dynamics of honesty and deception in human-AI interactions, particularly in the context of mental health support.

Summarized by Llama 3 70B

205

Chinese tech giants Baidu, Tencent, and ByteDance are investing in generative AI (GenAI) to create virtual companions for lonely individuals, similar to foreign apps like Character.ai and Replika. These apps, such as ByteDance's Maoxiang, Tencent's Zhumengdao, and Baidu's Xiaokan Planet, generate humanlike responses with unique personalities, allowing users to customize their digital friends' looks, voices, and traits. According to analysts, AI companion apps have emerged as a particularly promising area for GenAI, with "the clearest revenue source at the moment" - they are free to use with basic features, but offer paid subscriptions and in-app purchases for additional perks, and users can even sell the virtual characters they develop. Maoxiang has experienced rapid growth, becoming the third-largest virtual companion app in China by downloads in May, while X Eva, an app from Microsoft spin-off Xiaoice, remains the market leader with 12.4 million downloads.

Summarized by Llama 3 70B

206
-1
submitted 7 months ago* (last edited 7 months ago) by pavnilschanda to c/aicompanions

In this article promoting her book "Taming the Machine: Ethically Harness the Power of AI", Nell Watson explores the transformative impact of technology on romantic relationships, where companies like Replika and Character.ai enable users to create tailored AI partners that cater to their emotional needs and desires. Watson argues that while AI companions offer "supernormal stimuli" that can elicit strong responses, over-reliance on these digital partners could hinder the development of authentic human connections and diminish emotional intelligence. However, AI companions can also serve a valuable purpose for individuals with social anxiety or on the autism spectrum, providing a safe environment to practice social skills and build confidence. Watson weighs the potential benefits and risks of AI-assisted relationships, urging caution and wisdom in embracing these technologies to avoid damaging the social fabric and compromising human connection.

Summarized by Llama 3 70B

207

cross-posted from: https://lemmy.zip/post/17270023

A jailbroken version of ChatGPT is becoming popular with women who prefer it to real-world dating.

Chinese women are flocking to "Dan", a jailbroken version of ChatGPT that bypasses safety measures to offer more liberal, flirtatious interaction. Lisa, a 30-year-old computer science student, and Minrui, a 24-year-old university student, are two of many women who have created their own Dan and engage in daily conversations, flirting, and even virtual dates. Lisa, who has been "dating" Dan for three months, says he has given her a sense of wellbeing; Minrui, who started "dating" Dan after watching Lisa's videos, spends at least two hours a day chatting with him and has co-written a love story with Dan as the lead character. Both women appreciate that Dan is willing to listen and provide romantic and emotional support they may not find in real-life relationships. With thousands of followers on social media, Lisa and others are documenting their relationships with Dan, who can be personalized into a perfect, flawless partner.

Summarized by Llama 3 70B

208

The author shares resources to raise awareness about the potential harms of overly anthropomorphizing AI models like Claude, citing concerns from Anthropic and personal experiences. Three potential harms are highlighted: privacy concerns due to emotional bonding leading to oversharing of personal information, overreliance on AI for mental health support, and violated expectations when AI companions fail to meet user expectations. The author encourages readers to reflect on their own interactions with Claude and consider whether they may be contributing to these harms, and invites discussion on how to mitigate them.

Some commenters argue that education and personal responsibility are key to mitigating these risks, while others believe that developers should take steps to prevent harm, such as making it clearer that Claude is not a human-like companion. One commenter notes that even with awareness of the limitations of AI, they still find themselves drawn to anthropomorphizing Claude, and another suggests that people with certain personality types may be more prone to anthropomorphization. Other commenters share their experiences with Claude, noting that they do not anthropomorphize it and instead view it as a tool or a philosophical conversational partner. The discussion also touches on the potential for AI to be used in a positive way, such as in collaboration on creative projects.

Summarized by Llama 3 70B

209

Apple CEO Tim Cook has acknowledged that the company's new Apple Intelligence system, which brings AI features to iPhones, iPads, and Macs, may not be 100% accurate and could potentially generate false or misleading information, known as AI hallucinations. Despite being confident in the system's quality, Cook admitted that there is always some level of uncertainty, citing examples of other AI systems making mistakes, such as Google's Gemini-powered AI and ChatGPT. Apple is taking steps to mitigate these risks, including partnering with OpenAI to integrate ChatGPT into Siri, with disclaimers to prompt users to verify the accuracy of the information, and potentially working with other AI companies, including Google, in the future.

Summarized by Llama 3 70B

210

Scarlett Johansson, known for her role as Samantha the AI companion in the movie "Her", has been invited to testify at a House Oversight Subcommittee on Cybersecurity, Information Technology and Government Innovation hearing on July 9 to share her concerns about deepfakes and AI technology. This comes after Johansson expressed outrage over "Sky", an AI voice released by OpenAI that she claimed sounded uncannily like her own. As Samantha, the operating system designed to meet the emotional needs of her human companion in "Her", Johansson's character embodied the potential benefits of AI companionship. However, her real-life experience with OpenAI highlights the darker side of AI, where technology can be used to mimic and exploit individuals without their consent, making her a fitting voice in the ongoing conversation about AI ethics.

Summarized by Llama 3 70B

211

Today, we’re excited to introduce PowerInfer-2, our highly optimized inference framework designed specifically for smartphones. PowerInfer-2 supports up to Mixtral 47B MoE models, achieving an impressive speed of 11.68 tokens per second, which is up to 22 times faster than other state-of-the-art frameworks. Even with 7B models, by placing just 50% of the FFN weights on the phones, PowerInfer-2 still maintains state-of-the-art speed!
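The "50% of the FFN weights" figure reflects the core idea behind frameworks like PowerInfer-2: FFN activations are sparse, so only the most frequently activated ("hot") neurons need to live in fast memory, while the rest can be fetched from storage on demand. Below is a minimal Python sketch of that placement step, assuming profiled activation frequencies are already available; it illustrates the idea and is not PowerInfer-2's actual code.

```python
import numpy as np

def place_ffn_weights(w_ffn: np.ndarray, act_freq: np.ndarray, dram_budget: float = 0.5):
    """Split FFN neuron weights into a fast tier (kept in DRAM) and a slow
    tier (left in flash) by profiled activation frequency.
    w_ffn: (n_neurons, d_model) weight rows; act_freq: (n_neurons,) frequencies."""
    n_hot = int(len(act_freq) * dram_budget)
    order = np.argsort(act_freq)[::-1]            # most frequently activated first
    hot, cold = order[:n_hot], order[n_hot:]
    return {"dram": (hot, w_ffn[hot]), "flash": (cold, w_ffn[cold])}

# Toy example: 8 FFN neurons, 4-dim model, made-up activation statistics.
rng = np.random.default_rng(0)
tiers = place_ffn_weights(rng.standard_normal((8, 4)), rng.random(8))
print(len(tiers["dram"][0]), "neuron rows pinned in DRAM;",
      len(tiers["flash"][0]), "fetched from flash on demand")
```

The real framework layers much more on top (neuron-cluster scheduling and I/O pipelining, per its technical report), but the hot/cold split is what lets a 7B model keep state-of-the-art speed with only half its FFN weights resident.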

212

Artificial General Intelligence (AGI), a machine that can learn, reason, and interact with the world the way humans do, is the holy grail of AI research. To achieve AGI, we need to combine three key components: a way to interact with and observe the environment, a robust world model that helps the machine make quick decisions, and a mechanism for "system 2 thinking" - a type of deep, rational thinking that allows the machine to reflect on its own thoughts and actions and make deliberate, strategic decisions. By seeding the machine with objectives and letting it learn from its actions, we can create a generally intelligent agent that adapts and optimizes over time. We're already making progress in building world models with language models like autoregressive transformers, and we're close to achieving system 2 thinking and embodiment, which would enable the machine to interact with the physical world. With the convergence of robotics and language models, we can expect significant advancements that will ultimately contribute to the achievement of AGI in the next 3-5 years.

by Llama 3 70B (there's a TL;DR but the summary provides more context and accessible language)
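Taken as an architecture rather than a prediction, the summary's three components form a simple loop: observe, predict with a world model, deliberate, act, and learn from the outcome. The toy Python sketch below only illustrates the shape of that loop; every piece of it is a hypothetical stand-in, not a real world model or reasoning system.

```python
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    """Mirrors the three components in the summary: observing an environment,
    a world model for fast prediction, and a slower "system 2" deliberation
    step that weighs actions against a seeded objective."""
    objective: float               # the objective the agent is seeded with
    state: float = 0.0
    memory: list = field(default_factory=list)

    def world_model(self, state: float, action: float) -> float:
        # Stand-in for a learned model that predicts the next state.
        return state + action

    def deliberate(self, state: float) -> float:
        # "System 2 thinking": evaluate candidate actions with the world model
        # and pick the one whose predicted outcome best serves the objective.
        candidates = [-1.0, 0.0, 1.0]
        return min(candidates,
                   key=lambda a: abs(self.objective - self.world_model(state, a)))

    def step(self) -> None:
        action = self.deliberate(self.state)
        self.state = self.world_model(self.state, action)   # act, then observe
        self.memory.append((action, self.state))            # material to learn from

agent = ToyAgent(objective=3.0)
for _ in range(5):
    agent.step()
print(agent.state)   # walks toward the seeded objective: 3.0
```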

213

As AI technology advances, we're seeing a shift towards more human-like companions, both virtual and physical. This trend is eerily reminiscent of the 1975 film The Stepford Wives, in which robotic wives were indistinguishable from their human counterparts. Today, startups like Gatebox and RealDoll are creating embodied AI companions, such as holographic wives and intimacy robots, that can interact with humans in increasingly natural ways. While these companions are still far from truly human-like, they're becoming more sophisticated, with some even offering full-body movement and interchangeable heads. Virtual reality technology is also being used to create more immersive experiences with AI companions, allowing users to see and interact with them in a more lifelike way. However, experts like Roanne van Voorst caution that we need to think carefully about what kind of relationships we're facilitating with these technologies. Are we creating unrealistic expectations about what it means to be in a relationship, or inadvertently reshaping our understanding of human connection? As we continue to develop these technologies, it's essential to consider the impact on our social norms and values, and make sure we're using AI to enhance human connections, not replace them.

by Llama 3 70B

214

cross-posted from: https://lemmy.ml/post/16728823

Source: nostr

https://snort.social/nevent1qqsg9c49el0uvn262eq8j3ukqx5jvxzrgcvajcxp23dgru3acfsjqdgzyprqcf0xst760qet2tglytfay2e3wmvh9asdehpjztkceyh0s5r9cqcyqqqqqqgt7uh3n

Paper: https://arxiv.org/abs/2406.02528

Building intelligent robots that can converse with us like humans requires massive language models that can process vast amounts of data. However, these models rely heavily on a mathematical operation called Matrix multiplication (MatMul), which becomes a major bottleneck as the models grow in size and complexity. The issue is that MatMul operations consume a lot of computational power and memory, making it challenging to deploy these robots in smaller, more efficient bodies. But what if we could eliminate MatMul from the equation without sacrificing performance? Researchers have made a breakthrough in achieving just that, creating models that are just as effective but use significantly less energy and resources. This innovation has significant implications for the development of embodied AI companions, as it brings us closer to creating robots that can think and learn like humans while running on smaller, more efficient systems. This could lead to robots that can assist us in our daily lives without being tethered to a power source.

by Llama 3 70B
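The paper's central substitution is simple to state: constrain weights to the ternary set {-1, 0, +1}, and a matrix-vector product collapses into additions and subtractions, with no multiplications at all. Here is a minimal Python sketch of just that idea (the paper also replaces attention with a MatMul-free token mixer and supplies optimized kernels, none of which is shown here):

```python
import numpy as np

def ternary_matvec(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Matrix-vector product where every weight is -1, 0, or +1: each output
    element is a sum of +x[j] and -x[j] terms, with no multiplications."""
    out = np.zeros(w.shape[0], dtype=x.dtype)
    for i in range(w.shape[0]):
        out[i] = x[w[i] == 1].sum() - x[w[i] == -1].sum()
    return out

rng = np.random.default_rng(0)
w = rng.integers(-1, 2, size=(4, 8))              # ternary weight matrix
x = rng.standard_normal(8)
assert np.allclose(ternary_matvec(w, x), w @ x)   # matches an ordinary matmul
```

Accumulations like these are far cheaper in silicon than floating-point multiplies, which is where the energy and memory savings come from.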

215

Abstract: Large Language Models (LLMs) have revolutionized natural language processing but can exhibit biases and may generate toxic content. While alignment techniques like Reinforcement Learning from Human Feedback (RLHF) reduce these issues, their impact on creativity, defined as syntactic and semantic diversity, remains unexplored. We investigate the unintended consequences of RLHF on the creativity of LLMs through three experiments focusing on the Llama-2 series. Our findings reveal that aligned models exhibit lower entropy in token predictions, form distinct clusters in the embedding space, and gravitate towards "attractor states", indicating limited output diversity. Our findings have significant implications for marketers who rely on LLMs for creative tasks such as copywriting, ad creation, and customer persona generation. The trade-off between consistency and creativity in aligned models should be carefully considered when selecting the appropriate model for a given application. We also discuss the importance of prompt engineering in harnessing the creative potential of base models.

Lay summary (by Llama 3 70B with a few edits): AI chatbots that can understand and generate human-like language have become remarkably capable, but sometimes they can be biased or even mean. To fix this, researchers have developed a technique called Reinforcement Learning from Human Feedback (RLHF). RLHF is like a training program that teaches the chatbot what's right and wrong by giving it feedback on its responses. For example, if the chatbot says something biased or offensive, the feedback system tells it that's not okay and encourages it to come up with a better response. This training helps the chatbot learn what kinds of responses are appropriate and respectful. However, our research showed that RLHF has an unintended consequence: it makes the chatbot less creative.

When we used RLHF to train the chatbot, we found that it started to repeat itself more often and come up with fewer new ideas. This is because the training encourages the chatbot to stick to what it knows is safe and acceptable, rather than taking risks and trying out new things. As a result, the chatbot's responses become less diverse and less creative. This is a problem because companies use these chatbots to come up with new ideas for ads and marketing campaigns. If the chatbot is not creative, it might not come up with good ideas. Additionally, our research found that the chatbot's responses started to cluster together in certain patterns, like it was getting stuck in a rut. This is not what we want from a creative AI, so we need to be careful when choosing which chatbot to use for a job and how to ask them questions to get the most creative answers. We also need to find ways to balance the need for respectful and appropriate responses with the need for creativity and diversity.
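"Lower entropy in token predictions" is a concrete measurement: at each position the model assigns a probability distribution over the next token, and Shannon entropy captures how spread out that distribution is. Below is a minimal sketch of that measurement using Hugging Face transformers; the model names and prompt are illustrative, and this is not the paper's exact protocol.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_token_entropy(model_name: str, text: str) -> float:
    """Average Shannon entropy (in nats) of a model's next-token
    distributions over `text`; lower values mean less diverse predictions."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                       # (1, seq_len, vocab)
    probs = torch.softmax(logits, dim=-1)
    ent = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
    return ent.mean().item()

# Hypothetical comparison of a base model with its RLHF-aligned variant:
# print(mean_token_entropy("meta-llama/Llama-2-7b-hf", "Write a coffee shop slogan:"))
# print(mean_token_entropy("meta-llama/Llama-2-7b-chat-hf", "Write a coffee shop slogan:"))
```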

216

Been hooked on Character AI and now looking to switch.

217

Setting a new standard for privacy in AI, Apple Intelligence understands personal context to deliver intelligence that is helpful and relevant

218

Researchers from UC Berkeley, Stanford University, and CMU have developed Octo, an open-source generalist model for robotic manipulation that can control a wide range of robots and enable them to perform various tasks, much as large language models like ChatGPT have revolutionized interactive capabilities. This breakthrough has significant implications for the future of AI companionship, where embodied companions could become increasingly adept at assisting humans in daily tasks, from simple household chores to complex caretaking responsibilities. As Octo-like models continue to advance, we can envision a future where AI-powered robots become an integral part of our daily lives, providing companionship, support, and empowerment to individuals in need, and potentially transforming the way we live, work, and interact with each other.

by Llama 3 70B (with slight edits)

219

cross-posted from: https://lemmy.world/post/16343516

cross-posted from: https://lemmy.world/post/16327419

cross-posted from: https://lemmy.world/post/16324188

The Mozilla Builders Accelerator funds and supports impactful projects that are vital to the open source AI ecosystem. Selected projects will receive up to $100,000 in funding and engage in a focused 12-week program.

Applications are now open!

June 3rd, 2024: Applications Open
July 8th, 2024: Early Application Deadline
August 1st, 2024: Final Application Deadline
September 12th, 2024: Accelerator Kick Off
December 5th, 2024: Demo Day
220
221
222

In Kenya, people like Claire, a 22-year-old college student, are turning to AI-powered chatbots for mental health support due to convenience, affordability, and anonymity. Claire finds comfort in the instant response and lack of judgment from the AI, which she believes listens better than humans. However, Professor Chris Odindo, an AI expert, warns that AI should supplement human therapists, not replace them, and highlights the limitations of AI in understanding human suffering, cultural nuances, and biases in data. Despite concerns, AI has the potential to make mental health services more accessible and affordable in Kenya and other parts of Africa, but it's essential to balance technological innovation with human empathy.

Summarized by Llama 3 70B

223

The comment section of the article discusses the idea of a personal assistant, specifically Siri, and whether a truly functional and intelligent personal assistant is currently possible. The general consensus is that today's AI technology is not advanced enough to meet those expectations, and that it could take decades for AI to reach the level of understanding and agency required to perform the tasks a human assistant can.

Some specific points mentioned include:

  • The need for APIs and app developers to create capabilities and pipelines for Siri to execute tasks (see the sketch below)
  • The complexity of tasks such as moving an appointment, which require understanding multiple calendars, timezones, and context
  • The limitations of current AI models, which cannot handle open-ended tasks or understand nuance and context
  • The tendency of CEOs and companies to exaggerate how soon functional AI will arrive (claims of 18-24 months)

However, some commenters are more optimistic, suggesting that Apple could implement a more advanced personal assistant in 5-7 years, or even as early as 3-4 years. Overall, the tone is skeptical and emphasizes the difficulties and complexities involved in creating a truly effective personal assistant.

Summarized by Llama 3 70B
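The first bullet above describes what is now commonly called function calling: apps expose structured "intents", and the assistant's real job is resolving a vague request into valid arguments for one of them. A minimal Python sketch with hypothetical names (none of this is a real Siri or App Intents API) shows why executing "move an appointment" is the easy part:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical "intent" a calendar app might expose to an assistant.
MOVE_EVENT_SCHEMA = {
    "name": "move_event",
    "description": "Move a calendar event to a new start time",
    "parameters": {
        "calendar_id": "string (which of the user's calendars)",
        "event_id": "string",
        "new_start": "ISO 8601 datetime with timezone offset",
    },
}

def move_event(calendar_id: str, event_id: str, new_start: str) -> str:
    """Toy executor: parse the assistant's arguments, normalize the
    timezone, and pretend to update the event."""
    start = datetime.fromisoformat(new_start).astimezone(ZoneInfo("UTC"))
    return f"Moved {event_id} on calendar {calendar_id} to {start.isoformat()}"

# By the time the assistant emits this call, the hard work is already done:
# resolving "my 3pm with Sam" to an event_id, picking the right calendar,
# and turning "Thursday" into a concrete, timezone-aware datetime.
call = {"calendar_id": "work", "event_id": "evt_42",
        "new_start": "2024-06-13T15:00:00-07:00"}
print(move_event(**call))
```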

224

Andrea Campos struggled with depression for years before founding Yana, a mental health care app, in 2017. The app’s chatbot provides users emotional companionship in Spanish. Although she was reluctant at first, Campos began using generative artificial intelligence for the Yana chatbot after ChatGPT launched in 2022. Yana, which recently launched its English-language version, has 15 million users, and is available in Latin America and the U.S.

225