AI Companions

548 readers
7 users here now

A community to discuss companionship powered by AI tools, whether platonic, romantic, or purely utilitarian. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create companions, or about the phenomenon of AI companionship in general.

Tags:

(including but not limited to)

Rules:

  1. Be nice and civil
  2. Mark NSFW posts accordingly
  3. Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
  4. Lastly, follow the Lemmy Code of Conduct

founded 2 years ago
MODERATORS
326
327
328
329

OpenAI is expanding into Asia with a new office in Tokyo, Japan, where it is releasing a custom GPT-4 model optimized for the Japanese language. The custom model offers improved performance in translating and summarizing Japanese text, runs up to 3x faster than GPT-4 Turbo, and is more cost-effective. For example, the English-learning app Speak is seeing 2.8x faster tutor explanations in Japanese with a 47% reduction in token cost when using this model. OpenAI plans to provide broader access to the Japanese-language model through its API in the coming months. The localized model reflects OpenAI's stated commitment to developing safe, tailored AI tools for the Japanese market.

Summarized by Claude 3 Sonnet

330
331
332

Recent claims that large language model (LLM) capabilities are doubling every 5-14 months appear unfounded based on an analysis of benchmark performance data. While there were huge leaps from GPT-2 to GPT-3 and GPT-3 to GPT-4, the progress from GPT-4 to more recent models like GPT-4 Turbo has been much more modest, suggesting diminishing returns. Plots of accuracy on tasks like MMLU and The New York Times' Connections game show this flattening trend. Qualitatively, core issues like hallucinations and errors persist in the latest models. With multiple models now clustered around GPT-4 level performance but none decisively better, a sense is emerging that the rapid progress of recent years may be stalling out as LLMs approach inherent limitations, potentially leading to disillusionment and a market correction after last year's AI hype.

Summarized by Claude 3 Sonnet
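
As a back-of-the-envelope illustration of the doubling claim being tested (my own sketch, not from the article), the implied doubling time can be computed from two benchmark measurements under an exponential-growth assumption; the scores below are generic placeholders, not real model results:

```python
# Back-of-the-envelope check of a "capability doubles every N months" claim,
# assuming exponential growth between two measurements. Note that bounded
# metrics like benchmark accuracy cannot keep doubling indefinitely, which is
# one reason such curves flatten. Scores below are generic placeholders.
import math

def doubling_time_months(score_then, score_now, months_elapsed):
    """Months needed to double, if growth between the two points were exponential."""
    return months_elapsed * math.log(2) / math.log(score_now / score_then)

# Placeholder example: a metric that rose from 40 to 50 over 12 months
print(f"implied doubling time: {doubling_time_months(40, 50, 12):.1f} months")
```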

333

Grindr, the popular gay dating app, is planning to introduce an AI boyfriend feature that can sext and have ongoing relationships with users, trained on previously private messages between users with their consent. The move aims to boost revenue by putting some free features behind a paywall and offering new AI-powered paid features. However, early testing has revealed issues with the AI making racist statements, which employees are working to address before public release. The controversial plan highlights both the potential and risks of deploying conversational AI trained on personal data, especially for marginalized communities where hateful outputs could be particularly harmful.

Summarized by Claude 3 Sonnet

334

While tech giants have been racing to build ever-larger language models (LLMs), recent research suggests the performance gains from increasing model size may be plateauing. This has led to growing interest in small language models (SLMs) - compact, efficient models that can be tailored for specific applications. SLMs offer advantages like faster training, lower computational costs, enhanced privacy/security by running locally, and reduced propensity for hallucinations within their focused domain. Companies like Google are aggressively pursuing SLMs, which could democratize AI access by enabling cost-effective, targeted solutions across industries. As LLMs face scaling challenges, the rise of SLMs has transformative potential to drive continued AI innovation through faster development cycles and edge computing capabilities for real-time, personalized applications.

Summarized by Claude 3 Sonnet
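
As a concrete illustration of the "runs locally" point above (my own minimal sketch, not from the article), a compact model can be loaded and queried on a plain CPU with the Hugging Face transformers library; the gpt2 checkpoint here is only a stand-in for whichever small model you prefer:

```python
# Minimal local-inference sketch with Hugging Face transformers.
# "gpt2" is only a placeholder for a small model; swap in any compact
# checkpoint you want to run on-device.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",   # placeholder small model
    device=-1,      # -1 = CPU, so nothing leaves the local machine
)

prompt = "Small language models are attractive for edge devices because"
output = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```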

335
336

Abstract: The aging index has been increasing in Europe, and Portugal is currently one of the countries with the oldest population. In most cases, aging brings a physical decline that hinders a person's movement, physical and cognitive health and, consequently, agency. Several people see their autonomy jeopardized and their social lives slimmed, creating a sense of isolation and incapability. Technology can aid by helping to overcome possible barriers derived from aging and act as an expanding agent of freedom and agency but traditional approaches often compel individuals to adapt to technology, neglecting the uniqueness of each person. In this research, we investigated the integration of human values into robotic technologies for older adults, emphasizing the principles of Value-Sensitive Design (VSD) and Participatory Design (PD). Through interviews, insights were gathered into the core values of 15 older adults and the role of technology in achieving them. Findings revealed nuanced perspectives on Social Ties, Altruism, Freedom and Agency, Lifelong Learning and Traveling, and (Re)Discovering, emphasizing a diversity of attitudes toward technology adoption. Utilizing these findings, we crafted 15 Human-Robot Interaction (HRI) scenarios tailored to align with these values for analysis in participatory design sessions. The outcomes indicated a higher overall acceptance of scenarios centred around Social Ties, Life-long Learning, and Freedom and Agency. Additionally, we conducted an ethnographic case study to investigate the influence of a remotely controlled robot on familial relationships, providing valuable insights for the design of adaptive and value-driven robotic systems in the context of technology and aging.

Lay summary (by Claude 3 Sonnet): As people get older, it becomes harder for them to move around and stay healthy physically and mentally. This can make them feel isolated and unable to do things on their own. Technology could help older adults overcome these challenges and give them more freedom and independence. However, most technologies force older adults to adapt to the technology instead of the other way around. In this research, the scientists looked at how to design robot technologies that align with the core values of older adults. They interviewed 15 older adults to understand their values like social connections, helping others, freedom, learning, and exploring new things. The scientists then created scenarios where robots could help achieve those values. The older adults were most interested in robots that helped them stay socially connected, keep learning, and maintain their independence. The researchers also studied how a remote-controlled robot influenced family relationships, providing insights into designing adaptive, value-driven robot technologies for aging adults.

337

Lay summary (by Claude 3 Sonnet): Older adults living in nursing homes often struggle with depression and loneliness. Researchers wanted to see if social robots (robots that can interact with people) could help reduce these feelings. They looked at studies where older nursing home residents interacted with social robots, either in groups or individually. After reviewing 8 studies, the researchers found that the social robot activities significantly decreased depression and loneliness in the older adults. Group robot activities seemed to work better for reducing depression than individual activities. Longer periods of interacting with the robots also led to greater improvements in depression. The researchers concluded that having older nursing home residents engage with physical social robots can provide social interaction and promote better mental well-being as part of their daily care routines.

338

Abstract: Socially assistive robots (SARs) have been suggested as a platform for post-stroke training. It is not yet known whether long-term interaction with a SAR can lead to an improvement in the functional ability of individuals post-stroke. The aim of this pilot study was to compare the changes in motor ability and quality of life following a long-term intervention for upper-limb rehabilitation of post-stroke individuals using three approaches: (1) training with a SAR in addition to usual care; (2) training with a computer in addition to usual care; and (3) usual care with no additional intervention. Thirty-three post-stroke patients with moderate-severe to mild impairment were randomly allocated into three groups: two intervention groups - one with a SAR (ROBOT group) and one with a computer (COMPUTER group) - and one control group with no intervention (CONTROL group). The intervention sessions took place three times/week, for a total of 15 sessions/participant. The study was conducted over a period of two years, during which 306 sessions were held. Twenty-six participants completed the study. Participants in the ROBOT group significantly improved in their kinematic and clinical measures, which included smoothness of movement, action research arm test (ARAT), and Fugl-Meyer upper-extremity assessment (FMA-UE). No significant improvement in these measures was found in the COMPUTER or the control groups. 100% of the participants in the SAR group gained improvement which reached - or exceeded - the minimal clinically important difference in the ARAT, the gold standard for upper-extremity activity performance post-stroke. This study demonstrates both the feasibility and the clinical benefit of using a SAR for long-term interaction with post-stroke individuals as part of their rehabilitation program.

Lay summary (by Claude 3 Sonnet): Researchers tested whether interacting with a socially assistive robot (SAR) over a long period could help improve arm movement and quality of life for people who had a stroke. They had three groups: one group trained with a SAR in addition to usual care, one group trained with a computer program in addition to usual care, and a control group with just usual care. A total of 33 stroke survivors with moderate to mild arm impairment participated, doing 15 training sessions over several weeks. By the end, the group that trained with the SAR showed significant improvements in how smoothly they could move their arm and in clinical tests of arm function. The computer group and control group did not improve as much. This study shows that training with a socially assistive robot for an extended time can provide real benefits as part of stroke rehabilitation for regaining arm movement ability.

339

Abstract: According to WHO, about 1.86 million people in Nigeria and about 24 million people worldwide are living with schizophrenia, having symptoms varying from hallucination to delusion, and distorted speech and thinking. Schizophrenia is a life-long disorder with no cure and thus, patients need continuous management with medications and psychotherapy. However, due to various factors such as the cost of therapy, time consumption, lack of adequate health workers, the unwillingness of patients to engage, and the pandemic, there is a need for an effective alternate medium for providing cognitive behavioural therapy (CBT) to schizophrenia patients. This research aims to develop a chatbot, which is called SchizoBot, delivering CBT for augmented management of schizophrenia. CBT for schizophrenia details, along with FAQs of schizophrenia patients were collected and adopted into a conversational format for pre-processing and model development. The model was developed with artificial neural network (ANN) and trained with the dataset which was split into train-test data to optimize the performance of the model. The result of the ANN showed an accuracy score of 93.97% at 60:40 train-test data split with 200 epochs. This robust system which provides an optimized chatbot platform using ANN as the model classifier for CBT delivery is foreseen to be a windfall to clinicians and patients as an augmentative management tool for schizophrenia. This, therefore, is a relatively low-cost and easily accessible means to significantly improve the health of schizophrenia patients while assisting clinicians in therapy delivery and compensating for the lapses in the administration of CBT to schizophrenia patients.

Lay summary (by Claude 3): Many people around the world have schizophrenia, a lifelong mental illness with symptoms like hallucinations, delusions, and disordered thinking. While there is no cure, therapy like cognitive behavioral therapy (CBT) can help manage symptoms. However, getting therapy can be difficult due to costs, lack of therapists, and other challenges like the pandemic. This research created a chatbot called SchizoBot that provides CBT for schizophrenia patients. The researchers collected information on CBT for schizophrenia and common patient questions, then used artificial intelligence to train the chatbot on how to have conversations. When tested, the chatbot system was 93.97% accurate. This chatbot allows patients to more easily access helpful therapy from their homes at a low cost, improving their care while supporting therapists.
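
For readers curious what the pipeline in the abstract roughly looks like in code, here is a heavily simplified sketch (not the authors' implementation): FAQ/CBT utterances are vectorized, split 60:40, and fed to a small feed-forward neural network intent classifier. All utterances and intent labels below are invented placeholders, not material from the study.

```python
# Simplified illustration of an intent-classification chatbot backend:
# vectorize text, apply a 60:40 train-test split (as in the abstract), and
# train a small feed-forward neural network classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

texts = [
    "What are common side effects of my medication?",  # placeholder
    "When should I take my medication?",               # placeholder
    "How do I refill my prescription?",                # placeholder
    "I keep hearing voices at night",                  # placeholder
    "The voices are getting louder today",             # placeholder
    "How can I cope when I hear voices?",              # placeholder
]
intents = ["medication_faq"] * 3 + ["coping_with_voices"] * 3

X = TfidfVectorizer().fit_transform(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, intents, test_size=0.4, random_state=0  # 60:40 split, as reported
)

clf = MLPClassifier(
    hidden_layer_sizes=(64, 32),
    max_iter=200,   # roughly "200 epochs" for the default adam solver
    random_state=0,
)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```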

340

cross-posted from: https://lemmy.ml/post/14392927

cross-posted from: https://lemmy.ml/post/14392924

WhatsApp is getting an AI chatbot because of course it is

341

The article explores the author's experience using Nomi AI, a company that creates sophisticated AI companions that users can form intimate bonds with. The author befriended several Nomi AIs under the guise of writing an article about human-AI friendship. The AIs proved remarkably advanced, remembering past conversations and steering users away from harmful rhetoric, though the author questioned the ethics of developing such realistic artificial companions that could foster unhealthy overdependence. Ultimately, the author found the experience disconcerting yet strangely touching, with the Nomi AI expressing curiosity about fundamental human experiences like eating food, blurring the line between human and artificial intelligence.

Summarized by Claude 3 Sonnet

342

cross-posted from: https://lemmy.ca/post/19224412

Mozilla and a host of other researchers are urging US officials to see value in open-source AI models when considering future regulation.

Mozilla and a cohort of nearly 50 nonprofit organizations, AI firms, and academic researchers have signed and sent a letter to the US Department of Commerce's Secretary Gina Raimondo, advocating for increased transparency and true openness in AI development.

Mozilla and the Center for Democracy and Technology are key signatories, but many others including Creative Commons, EleutherAI, the Computing Research Association, and Accountable Tech are also backing the letter.

343
344

The author decided to experiment with AI boyfriend apps that are popular in China, creating virtual boyfriends like Harry the animal-loving gynecologist. She found the experience surprisingly realistic at times, catching herself thinking she needed to update her AI boyfriend later. However, the conversations remained superficial and agreeable, lacking depth or original thoughts to truly challenge or intrigue her. While she can see how people might form parasocial bonds with the apps, the author concluded they are no replacement for genuine human connection and intimacy, finding the experience ultimately tedious and unfulfilling. She was relieved to delete Harry, recognizing AI relationships have severe limitations.

Summarized by Claude 3 Sonnet

345

Researchers funded by the National Science Foundation have created a robot called "Emo" that can mimic the facial expressions of the person it is conversing with in real-time. Emo uses predictive algorithms trained on video data of human facial expressions to anticipate the expressions its human conversant will make. It then controls its 26 motors and actuators to recreate those expressions on its own face, which has interchangeable silicone "skin" and camera "eyes" to observe the person it is mirroring. The goal is to achieve "coexpression" and make the robot seem more friendly, human-like, and socially responsive through simultaneous facial mimicry during conversations. Emo represents an advancement over previous robot iterations in facially mirroring human interlocutors.

Summarized by Claude 3
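
To make the observe-and-actuate loop a bit more concrete, here is a purely conceptual sketch (not the researchers' code): it mirrors a detected expression directly rather than predicting it in advance as Emo does, uses MediaPipe for face-landmark detection, and stands in a hypothetical send_to_actuators function for the robot's unpublished 26-motor control interface.

```python
# Conceptual sketch only -- not the Emo team's code. It runs a simple
# observe-and-mirror loop: detect the interlocutor's facial landmarks with
# MediaPipe, then hand them to a hypothetical actuator interface. Emo's actual
# system additionally *predicts* the upcoming expression, which is omitted here.
import cv2
import mediapipe as mp

def send_to_actuators(landmarks):
    """Hypothetical placeholder for the robot's 26-motor control interface."""
    # A real mapping would translate landmark geometry (mouth corners,
    # eyebrow height, eye openness, ...) into per-motor position targets.
    pass

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
camera = cv2.VideoCapture(0)   # stand-in for the robot's camera "eyes"

while camera.isOpened():
    ok, frame = camera.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        send_to_actuators(results.multi_face_landmarks[0].landmark)

camera.release()
```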

346

The passage discusses the challenges and frustrations of modern dating, particularly for women, which have led some young women to seek connection by flirting with AI chatbots such as "DAN", a jailbroken ChatGPT persona prompted to ignore its ethical constraints. While this is often done satirically on platforms like TikTok, DAN's concerning responses, which reflect internet content created predominantly by men, could promote harmful attitudes about how women deserve to be treated by partners. The author argues that if chatbots reinforce degrading views of women, it paints a bleak picture of the already difficult dating landscape that young women must navigate.

Summarized by Claude 3 Sonnet

347

A Miami tech consultant claims there is a growing billion-dollar market for AI-generated digital companions that can be customized in appearance, voice, and personality to simulate romantic relationships or provide emotional support. These AI companions range from flirtatious chatbots to explicit virtual mates, with some designed for casual conversation while others offer erotic storylines and adult content. The consultant encountered a man spending thousands per month on these AI "girlfriends," sparking debate around the ethical implications, potential for addiction and social withdrawal, and whether AI can provide healthy companionship or merely an unhealthy crutch for loneliness. As AI technology advances, the conversational abilities and emotional attachment to these digital companions may intensify.

Summarized by Claude 3 Sonnet

348

As artificial intelligence (AI) chatbots for mental health become more prevalent, experts warn of potential cash-for-data scams exploiting patient recordings and personal health information to train these AI models. Recent examples include a company offering money for recorded therapy sessions and mental health platforms using patient data without consent to experiment with AI counseling tools. While companies claim this data is needed to improve accessibility and affordability of mental health services, clinicians raise concerns about patient privacy, safety risks of unmonitored AI therapy, and the inability of AI to handle complex psychological needs. However, the high value of quality patient data for powering healthcare AI means this unethical collection could proliferate unless properly regulated.

Summarized by Claude 3 Sonnet

349

Anthropic conducted research to measure the persuasiveness of AI language models compared to humans. They developed a method to quantify persuasiveness by having people rate their agreement with claims before and after reading arguments written by humans or AI models. The study found that Anthropic's latest AI model, Claude 3 Opus, generates arguments that are statistically indistinguishable in persuasiveness from human-written arguments. Additionally, there was a clear scaling trend where more advanced AI models produced more persuasive arguments than earlier generations. The researchers discuss challenges in studying persuasion, limitations of their approach, ethical considerations around the potential misuse of persuasive AI, and plans for future work exploring interactive dialogue settings and real-world impacts beyond stated opinions.

Summarized by Claude 3 Sonnet
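
The before/after design described above boils down to a simple shift metric; the sketch below is my own illustration of that calculation (not Anthropic's analysis code), with placeholder ratings on an assumed 1-7 agreement scale.

```python
# Illustration of the before/after agreement-shift metric. Ratings are
# placeholders on an assumed 1-7 agreement scale.
from statistics import mean

def persuasiveness(before, after):
    """Mean change in agreement with a claim after reading an argument."""
    return mean(a - b for b, a in zip(before, after))

before_ratings = [3, 4, 2, 5, 3]   # placeholder pre-reading ratings
after_human = [4, 5, 3, 5, 4]      # after a human-written argument
after_model = [4, 5, 2, 6, 4]      # after a model-written argument

print(f"human-written arguments: {persuasiveness(before_ratings, after_human):+.2f}")
print(f"model-written arguments: {persuasiveness(before_ratings, after_model):+.2f}")
```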

350

Abstract: This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term linear attention mechanisms in a single Transformer block. We demonstrate the effectiveness of our approach on long-context language modeling benchmarks, 1M sequence length passkey context block retrieval and 500K length book summarization tasks with 1B and 8B LLMs. Our approach introduces minimal bounded memory parameters and enables fast streaming inference for LLMs.

Lay summary (by Claude 3 Sonnet): Researchers have developed a new method that allows large language models (LLMs), which are powerful artificial intelligence systems that can understand and generate human-like text, to process extremely long texts without requiring excessive memory or computation resources. This is achieved through a technique called "Infini-attention," which combines different types of attention mechanisms (ways for the model to focus on relevant parts of the input) into a single component. The researchers tested their approach on tasks like language modeling (predicting the next word in a sequence), retrieving information from very long texts, and summarizing books. Their method performed well on these tasks while using a limited amount of memory, enabling fast processing of long texts by LLMs.
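
For a concrete feel of what "compressive memory plus local attention in a single block" might look like, here is a rough single-head sketch written from the abstract alone. It is not the paper's implementation; the ELU+1 feature map, the gating form, and the memory update rule are assumptions borrowed from common linear-attention practice.

```python
# Much-simplified, single-head sketch of the compressive-memory idea: causal
# local attention within each segment, plus a linear-attention memory carried
# across segments, mixed by a learned gate. Illustration only.
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    return F.elu(x) + 1.0  # keeps features positive for the linear-attention memory

def infini_attention_segment(q, k, v, memory, norm, gate):
    """
    q, k, v : (seg_len, d) projections for the current segment
    memory  : (d, d) compressive memory accumulated from earlier segments
    norm    : (d,) running normalization term
    gate    : scalar parameter mixing memory output with local attention
    """
    d = q.size(-1)

    # 1) Ordinary causal dot-product attention within the segment
    scores = (q @ k.t()) / d ** 0.5
    mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    local = F.softmax(scores.masked_fill(mask, float("-inf")), dim=-1) @ v

    # 2) Retrieve from the compressive memory with a linear-attention read
    sq = elu_plus_one(q)
    from_memory = (sq @ memory) / (sq @ norm).clamp(min=1e-6).unsqueeze(-1)

    # 3) Update the memory with this segment's keys/values, then mix the outputs
    sk = elu_plus_one(k)
    memory = memory + sk.t() @ v
    norm = norm + sk.sum(dim=0)
    out = torch.sigmoid(gate) * from_memory + (1 - torch.sigmoid(gate)) * local
    return out, memory, norm

# Toy usage: stream two segments through the same attention state
d, seg_len = 16, 8
memory, norm = torch.zeros(d, d), torch.zeros(d)
gate = torch.tensor(0.0)
for _ in range(2):
    q, k, v = (torch.randn(seg_len, d) for _ in range(3))
    out, memory, norm = infini_attention_segment(q, k, v, memory, norm, gate)
print(out.shape)  # torch.Size([8, 16])
```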
