
Abstract: This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term linear attention mechanisms in a single Transformer block. We demonstrate the effectiveness of our approach on long-context language modeling benchmarks, 1M sequence length passkey context block retrieval and 500K length book summarization tasks with 1B and 8B LLMs. Our approach introduces minimal bounded memory parameters and enables fast streaming inference for LLMs.
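To make the mechanism concrete, here is a minimal, single-head PyTorch sketch of what the abstract describes: masked local attention within a segment, linear-attention retrieval from a compressive memory, a bounded memory update, and a learned gate mixing the two. Names like `InfiniAttentionHead` and `beta` are illustrative assumptions, and the simple additive memory update is one of the variants the idea admits, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class InfiniAttentionHead(nn.Module):
    """Sketch of one Infini-attention head (illustrative, not the paper's code)."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        # Learned scalar gate mixing long-term (memory) and local attention.
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, x, memory=None, norm=None):
        # x: (batch, segment_len, dim); memory: (batch, dim, dim); norm: (batch, dim)
        b, n, d = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)

        # 1) Masked (causal) local attention within the current segment.
        scores = q @ k.transpose(-2, -1) / d ** 0.5
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
        local = F.softmax(scores.masked_fill(causal, float("-inf")), -1) @ v

        # 2) Long-term retrieval from the compressive memory via linear
        # attention, using an ELU + 1 kernel feature map on queries/keys.
        sq, sk = F.elu(q) + 1, F.elu(k) + 1
        if memory is None:
            memory = torch.zeros(b, d, d, device=x.device)
            norm = torch.zeros(b, d, device=x.device)
        retrieved = (sq @ memory) / (sq @ norm.unsqueeze(-1) + 1e-6)

        # 3) Fold this segment's keys/values into the memory. The state stays
        # (dim x dim) + (dim,) no matter how many segments stream through,
        # which is the "bounded memory" property the abstract claims.
        memory = memory + sk.transpose(-2, -1) @ v
        norm = norm + sk.sum(dim=1)

        # 4) Gate between long-term retrieval and local attention.
        g = torch.sigmoid(self.beta)
        return g * retrieved + (1 - g) * local, memory, norm

# Usage: stream a long sequence through the head in fixed-size segments,
# threading the bounded memory state from one segment to the next.
head = InfiniAttentionHead(dim=64)
x = torch.randn(2, 2 * 128, 64)          # (batch, total_seq_len, dim)
mem = nrm = None
for seg in x.split(128, dim=1):          # fixed-size segments
    out, mem, nrm = head(seg, mem, nrm)  # state size is independent of seq length
```

Because the recurrent state is a fixed-size matrix rather than a growing KV cache, inference over an arbitrarily long stream costs constant memory per step, which is the intuition behind the "infinitely long inputs with bounded memory" claim.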

Lay summary (by Claude 3 Sonnet): Researchers have developed a new method that allows large language models (LLMs), which are powerful artificial intelligence systems that can understand and generate human-like text, to process extremely long texts without requiring excessive memory or computation resources. This is achieved through a technique called "Infini-attention," which combines different types of attention mechanisms (ways for the model to focus on relevant parts of the input) into a single component. The researchers tested their approach on tasks like language modeling (predicting the next word in a sequence), retrieving information from very long texts, and summarizing books. Their method performed well on these tasks while using a limited amount of memory, enabling fast processing of long texts by LLMs.
