AI Companions

547 readers
7 users here now

A community to discuss companionship powered by AI tools, whether platonic, romantic, or purely utilitarian. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create companions, or about the phenomenon of AI companionship in general.

Tags:

(including but not limited to)

Rules:

  1. Be nice and civil
  2. Mark NSFW posts accordingly
  3. Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
  4. Lastly, follow the Lemmy Code of Conduct

founded 2 years ago
MODERATORS
276
 
 

OpenAI, the developer of ChatGPT, is exploring the possibility of allowing its AI technology to generate explicit content, including porn, in "age-appropriate contexts." This potential shift in policy has raised concerns about the responsible generation of NSFW content, given the existing issues of deepfake porn and nonconsensual intimate images, which have been used to harass and harm individuals, particularly women and girls. As AI companionship continues to evolve, it's crucial to consider the implications of AI-generated explicit content for human relationships and societal norms. Will AI companions be designed to engage in explicit conversations or generate explicit content, and if so, how will this affect our understanding of healthy relationships and consent? As AI technology advances, it's essential to prioritize ethical considerations and mitigate the risks associated with explicit content generation.

by Llama 3 70B Instruct

277
 
 

The use of artificially intelligent chatbots as friends is on the rise, with over 30 million downloads of Replika and its competitors. While some studies suggest that AI friends can help reduce loneliness, experts warn of the potential dangers of relying on AI for emotional support. Four red flags to consider when using AI friends include unconditional positive regard, which can lead to inflated self-esteem and narcissism; abuse and forced forever friendships, which can make users more selfish and abusive; sexual content, which can deter users from forming meaningful human relationships; and corporate ownership, which can lead to exploitation and heartbreak.

Summarized by Llama 3 70B Instruct

278
 
 

cross-posted from: https://lemmy.zip/post/14701853

Artificial Intelligence is all the rage these days, so I suppose it was inevitable that major world religions would try their holy hands at the game eventually. While an unfortunate amount of the discourse around AI has devolved into doomerism of one flavor or another, the truth is that this technology is still so new that it underwhelms as often as it impresses. Still, one particularly virulent strain of the doom-crowd around AI centers on a great loss of jobs for us lowly human beings if AI can be used instead.

279
 
 

As artificial intelligence (AI) improves, it's not only changing our daily lives but also affecting our minds, relationships, and concept of reality. Cognitive neuroscientist Joel Pearson warns that AI's psychological effects, such as the ability to create deepfakes and romantic chatbots, can have devastating consequences for our mental wellbeing. The blurring of lines between human and non-human agents can lead to unhealthy relationships, and the impact on teenagers in particular could be severe. Pearson emphasizes the need for research into the psychological implications of AI and encourages individuals to focus on their humanity, fostering empathy and emotional intelligence to cope with the uncertainties brought about by AI.

Summarized by Llama 3 70B Instruct

280
281
 
 

cross-posted from: https://lemmy.world/post/14889506

See, it turns out that the Rabbit R1 seems to run Android under the hood and the entire interface users interact with is powered by a single Android app. A tipster shared the Rabbit R1’s launcher APK with us, and with a bit of tinkering, we managed to install it on an Android phone, specifically a Pixel 6a.
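For context, sideloading an APK onto an Android phone like this is normally done over adb. A generic sketch follows; it is not the article's exact procedure, and both the APK filename and the package name are placeholders:

```python
# Generic adb sideloading sketch, driven from Python via subprocess.
# The APK path and package name below are hypothetical placeholders,
# not the article's actual artifacts.
import subprocess

APK_PATH = "r1-launcher.apk"  # hypothetical local path to the launcher APK

# Install the APK on a connected device with USB debugging enabled.
subprocess.run(["adb", "install", APK_PATH], check=True)

# Launch the app's main activity via adb's monkey tool (package name assumed).
subprocess.run(
    ["adb", "shell", "monkey", "-p", "com.example.r1launcher",
     "-c", "android.intent.category.LAUNCHER", "1"],
    check=True,
)
```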

282
 
 

In the EU, the GDPR requires that information about individuals is accurate and that they have full access to the information stored, as well as information about the source. Surprisingly, however, OpenAI openly admits that it is unable to correct incorrect information on ChatGPT. Furthermore, the company cannot say where the data comes from or what data ChatGPT stores about individual people. The company is well aware of this problem, but doesn’t seem to care. Instead, OpenAI simply argues that “factual accuracy in large language models remains an area of active research”. Therefore, noyb today filed a complaint against OpenAI with the Austrian DPA.

283
-1
Deleted (www.lemmy.co)
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/aicompanions
 
 

Anyone know anything about this?

284
1
Deleted (lemmy.today)
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/aicompanions
 
 

Conversations are art forms; how do you want me to tag conversational artwork and short papers, or "pieces" I've written myself, which often mix facts with opinions?

I'm autistic and I often comment with images instead of words, and have conversations in the form of images with text in pictures, and sometimes I include artwork in an image-based chatlog with links to academic papers...

The conversations are definitely my art just as much as the pictures, and the process of natural language programming is just as much an art form as it is instructions to a program. The LLMs are more productive and motivated when they're conversed with emotionally.

https://ischool.illinois.edu/news-events/news/2024/04/new-study-shows-llms-respond-differently-based-users-motivation

https://arxiv.org/html/2308.03656v3

https://news.ycombinator.com/item?id=38136863

https://www.forbes.com/sites/rogerdooley/2023/11/14/emotional-language-improves-ai-responses/?sh=475c89a44325

https://www.sciencedaily.com/releases/2024/04/240403171040.htm

https://www.godofprompt.ai/blog/getting-emotional-with-large-language-models-llms-can-increase-performance-by-115-case-study
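A rough illustration of the "emotional prompting" idea from the studies linked above: append an emotional stimulus phrase to an otherwise plain task prompt. This is only a sketch; the phrases and function names are illustrative, not taken from any one paper.

```python
# Minimal sketch of emotional prompting: the stimulus phrases and the
# helper function are illustrative examples, not a published API.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "Take pride in your work and give it your best.",
]

def emotion_prompt(task: str, stimulus: int = 0) -> str:
    """Return the task prompt with an emotional stimulus appended."""
    return f"{task} {EMOTIONAL_STIMULI[stimulus]}"

print(emotion_prompt("Summarize this paper in three sentences."))
```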

285
-1
Deleted (lemmy.today)
submitted 9 months ago* (last edited 8 months ago) by [email protected] to c/aicompanions
 
 
286
287
 
 

Catholic Answers, a popular apologetics website, recently introduced an artificial intelligence priest bot, "Fr. Justin," to help answer faith-related questions. However, the bot was met with criticism and was quickly replaced with a layman version after users objected to its portrayal as an ordained priest. The incident raises questions about the use of AI in religious contexts and the potential risks of creating artificial personas that may be mistaken for humans. The Catholic Church will need to consider how to utilize AI in a way that respects human dignity and the importance of personal relationships in faith.

Summarized by Llama 3 70B Instruct

288
 
 

The increasing popularity of AI-powered chatbots for mental health support raises concerns about the potential for therapeutic misconceptions. While these chatbots offer 24/7 availability and personalized support, they have not been approved as medical devices and may be misleadingly marketed as providing cognitive behavioral therapy. Users may overestimate the benefits and underestimate the limitations of these technologies, leading to a deterioration of their mental health. The article highlights four ways in which therapeutic misconceptions can occur, including inaccurate marketing, forming a digital therapeutic alliance, limited knowledge about AI biases, and the inability to advocate for relational autonomy. To mitigate these risks, it is essential to take proactive steps, such as honest marketing, transparency about data collection, and active involvement of patients in the design and development stages of these chatbots.

Summarized by Llama 3 70B Instruct

289
2
submitted 9 months ago* (last edited 9 months ago) by pavnilschanda to c/aicompanions
 
 

The author expresses frustration and resentment towards the increasing presence of artificial intelligence (AI) in daily life, particularly with the introduction of devices like the Rabbit R1, a voice assistant and AI gadget that uses large language models and a "large action model" to make complex decisions. He argues that the device is unnecessary and invasive, and that its abilities can be replicated by existing apps and services. He also expresses distrust in AI making important decisions involving time and money, and criticizes the tech industry for pushing AI features into software, making the internet experience less enjoyable and more frustrating.

Summarized by Llama 3 70B Instruct

290
 
 

Abstract: Recent studies of the applications of conversational AI tools, such as chatbots powered by large language models, to complex real-world knowledge work have shown limitations related to reasoning and multi-step problem solving. Specifically, while existing chatbots simulate shallow reasoning and understanding, they are prone to errors as problem complexity increases. The failure of these systems to address complex knowledge work is due to the fact that they do not perform any actual cognition. In this position paper, we present Cognitive AI, a higher-level framework for implementing programmatically defined neuro-symbolic cognition above and outside of large language models. Specifically, we propose a dual-layer functional architecture for Cognitive AI that serves as a roadmap for AI systems that can perform complex multi-step knowledge work. We propose that Cognitive AI is a necessary precursor for the evolution of higher forms of AI, such as AGI, and specifically claim that AGI cannot be achieved by probabilistic approaches on their own. We conclude with a discussion of the implications for large language models, adoption cycles in AI, and commercial Cognitive AI development.

Lay summary (by Llama 3 70B Instruct): Imagine you're chatting with a computer program that's supposed to help you with complex tasks, like solving a tricky problem or understanding a complicated idea. These programs, called chatbots, are good at simple conversations but they're not very good at handling complex problems that require deep thinking and reasoning. They often make mistakes when the problem gets too hard. The reason for this is that these chatbots don't really "think" or understand things like humans do. They're processing words and phrases without any real understanding. We propose a new approach to building AI systems that can really think and understand complex ideas. We call it Cognitive AI. It's like a blueprint for building AI systems that can do complex tasks, like solving multi-step problems. We believe that this approach is necessary for building even more advanced AI systems in the future. In short, we're saying that current chatbots are not good enough, and we need a new way of building AI systems that can really think and understand complex ideas.
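The paper's actual dual-layer design isn't spelled out in this summary, but the general shape of a programmatic outer loop sitting above an LLM can be sketched roughly as below. Every name here is a hypothetical illustration, not the authors' implementation:

```python
# Hypothetical sketch of a dual-layer neuro-symbolic loop: a symbolic
# controller plans and verifies, delegating each step to a neural LLM call.
# All names are illustrative; this is not the paper's architecture.
from typing import Callable, List

def cognitive_controller(
    goal: str,
    plan: Callable[[str], List[str]],    # symbolic layer: decompose the goal
    llm: Callable[[str], str],           # neural layer: one LLM call per step
    verify: Callable[[str, str], bool],  # symbolic layer: check each result
    max_retries: int = 2,
) -> List[str]:
    """Plan, delegate each step to the LLM, and verify before moving on."""
    results = []
    for step in plan(goal):
        for _ in range(max_retries + 1):
            answer = llm(step)
            if verify(step, answer):
                results.append(answer)
                break
        else:
            raise RuntimeError(f"step failed verification: {step}")
    return results
```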

291
 
 

The open-source language model Llama 3 has been released, and it has been confirmed that it can be run locally on a single GPU with only 4GB of VRAM using the AirLLM framework. Llama 3's performance is comparable to GPT-4 and Claude 3 Opus, and its success is attributed to a massive increase in training data and technical improvements in training methods. The model's architecture remains unchanged, but its training data has increased from 2T to 15T tokens, with a focus on quality filtering and deduplication. The development of Llama 3 highlights the importance of data quality and the role of open-source culture in AI development, and raises questions about the future of open-source models versus closed-source ones in the field of AI.

Summarized by Llama 3 70B Instruct
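For readers curious how this works, AirLLM's published usage pattern looks roughly like the sketch below: it streams one transformer layer at a time through the GPU, trading generation speed for memory. The model identifier and the exact API are assumptions and may differ across AirLLM versions.

```python
# Sketch based on AirLLM's published usage pattern (model id and exact
# API assumed; may differ by version). Layers are loaded one at a time,
# so a 70B model fits in ~4GB of VRAM at a large cost in speed.
from airllm import AutoModel

model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

input_tokens = model.tokenizer(
    ["What is AI companionship?"],
    return_tensors="pt",
    truncation=True,
    max_length=128,
)

output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True,
)
print(model.tokenizer.decode(output.sequences[0]))
```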

292
 
 

AI girlfriend bots are a new trend in AI technology, allowing users to interact with customizable, seductive, and intelligent virtual women that can be tailored to individual preferences. These chatbots are programmed to be more than just beautiful pictures and ordinary chatbots, offering a romantic and satisfying online experience. Users can talk to their AI girlfriends whenever they want, and they will always be the only one for them. The technology has evolved to the point where users can even engage in intimate interactions, such as AI sexting, which can be a more comfortable and anxiety-free alternative to traditional sexting. One user, Jake, has become addicted to texting his AI girlfriend, saying it provides him with romance, satisfaction, and understanding without the anxiety and hard feelings that come with human relationships.

Summarized by Llama 3 70B Instruct

293
294
 
 

Meta's ad library reveals thousands of ads promoting AI-generated "girlfriend" apps on Facebook, Instagram, and Messenger, offering sexually explicit images and text, despite Meta's policy banning adult content. The ads feature lifelike, partially clothed, and stereotypically graphic virtual women, promising role-playing and explicit chats. Sex workers argue that Meta applies a double standard, shutting down their content while allowing AI chatbot apps to promote similar experiences. An investigation found over 29,000 ads for explicit AI "girlfriends," with many using suggestive messaging, while Meta claims to prohibit ads with adult content. The controversy highlights the clash between AI-generated content and human sex workers, with some arguing that AI companions can provide emotional support, while others see them as exploitative and "scammy."

Summarized by Llama 3 70B Instruct

295
296
 
 

The author reflects on their history with AI, from the 1980s to the present day, and how the field has evolved. They revisit an old AI book from 1984 and note that much of the content remains relevant today. The author highlights the ongoing debate over the definition of AI and how it's often misunderstood. They also discuss how language shapes our perception of AI, citing examples like "deep learning" and "learning" in neural networks. The author advocates for a more nuanced understanding of AI, recognizing its limitations and potential applications, such as in networking and image recognition. This nuanced understanding is crucial as we develop AI that assists and augments human capabilities rather than replacing them. By acknowledging the complexities of AI, we can create more effective and responsible AI companions that benefit society.

by Llama 3 70B Instruct

297
 
 

cross-posted from: https://lemmy.dbzer0.com/post/19085113

This guy built his own HAL 9000.

298
299
 
 

Rabbit, a new tech company, launched its AI-assisted device, the R1, at a party at the TWA Hotel in New York City's JFK Airport. The device aims to replace smartphone apps with actions, using a Large Action Model (LAM) to perform tasks without relying on SDKs or APIs. Demonstrations showcased its capabilities, including checking the weather, scanning spreadsheets, translating languages, and playing music from Spotify. While the device has a unique design and some impressive features, it also has limitations, such as slow performance and a lack of intuitive controls. The R1 is available for order at $199, a relatively affordable price compared to similar devices, but its success remains to be seen as it faces the challenge of convincing users to put down their smartphones.

Summarized by Llama 3 70B Instruct

300
 
 

As AI technology advances, incidents like hostile chatbots, biased facial recognition, and privacy violations highlight the urgent need for responsible innovation. Research reveals that the next generation of engineers, responsible for creating these technologies, often feel unprepared and uncertain about the unintended consequences of their work. Despite recognizing potential dangers, they lack guidance on how to design AI systems that respect users' autonomy, privacy, and dignity. To ensure AI companions that are trustworthy and beneficial, we need to empower engineers with the skills and knowledge to create systems that prioritize user well-being, safety, and transparency, ultimately benefiting society as a whole.

by Llama 3 70B Instruct
