An interesting and clever proposal for mitigating prompt injection vulnerabilities.

  • The author proposes a dual Large Language Model (LLM) system, consisting of a Privileged LLM and a Quarantined LLM.
  • The Privileged LLM is the core of the AI assistant. It accepts input from trusted sources, primarily the user, and acts on that input in various ways. It has access to tools and can perform potentially destructive state-changing operations.
  • The Quarantined LLM is used whenever untrusted content needs to be worked with. It has no access to tools and is treated as though it could go rogue at any moment.
  • The Privileged LLM and Quarantined LLM should never directly interact. Unfiltered content output by the Quarantined LLM should never be forwarded to the Privileged LLM.
  • The system also includes a Controller, which is regular software, not a language model. It handles interactions with users, triggers the LLMs, and executes actions on behalf of the Privileged LLM.
  • The Controller stores variables and passes them to and from the Quarantined LLM, while ensuring their content is never provided to the Privileged LLM.
  • The Privileged LLM only ever sees variable names; it is never exposed to either the untrusted content (an email, in the article's running example) or the tainted summary that comes back from the Quarantined LLM.
  • The system should also be cautious with chaining, where the output of one LLM prompt is piped into another, since that is a dangerous vector for prompt injection. (A minimal sketch of the whole pattern follows this list.)
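
To make the data flow concrete, here is a minimal sketch in Python of how a Controller might mediate between the two models. Everything here is illustrative: `call_llm` is a hypothetical stand-in for a real model API, and the class and method names are invented for this example, not taken from the article.

```python
# Minimal sketch of the dual-LLM pattern, assuming a hypothetical
# call_llm() stand-in for whatever model API is actually used.
# None of these names come from the article itself.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real model API here."""
    raise NotImplementedError

class Controller:
    """Regular software (not a model) that mediates everything."""

    def __init__(self) -> None:
        self._vars: dict[str, str] = {}  # opaque name -> untrusted content
        self._counter = 0

    def store(self, content: str) -> str:
        """Store untrusted content; hand back only an opaque name."""
        self._counter += 1
        name = f"$VAR{self._counter}"
        self._vars[name] = content
        return name

    def quarantined_transform(self, instruction: str, var_name: str) -> str:
        """Run the Quarantined LLM (no tools) over untrusted content.
        Its output is itself tainted, so it is stored as a new variable
        rather than ever being forwarded to the Privileged LLM."""
        tainted = call_llm(f"{instruction}\n\n{self._vars[var_name]}")
        return self.store(tainted)

    def privileged_step(self, user_request: str, var_names: list[str]) -> str:
        """The Privileged LLM sees the trusted user request and variable
        *names* only -- never the contents behind them."""
        prompt = (
            f"User request: {user_request}\n"
            f"Available variables: {', '.join(var_names)}\n"
            "Plan an action, referring to variables strictly by name."
        )
        return call_llm(prompt)

    def execute(self, action: str) -> None:
        """Substitute variable contents only at the point of execution
        (e.g. rendering output to the user), never when building a
        prompt for the Privileged LLM. Longest names go first so $VAR1
        cannot clobber part of $VAR12."""
        for name in sorted(self._vars, key=len, reverse=True):
            action = action.replace(name, self._vars[name])
        print(action)
```

A hypothetical usage flow, with `fetch_latest_email` as a placeholder for whatever tool actually retrieves the untrusted content:

```python
ctrl = Controller()
v1 = ctrl.store(fetch_latest_email())  # placeholder fetch; content is untrusted
v2 = ctrl.quarantined_transform("Summarize the following email:", v1)
plan = ctrl.privileged_step("Summarize my latest email for me", [v2])
ctrl.execute(plan)  # variable contents are substituted only at this point
```

The key property is that `privileged_step` is only ever handed variable names; substitution happens in `execute`, after the Privileged LLM has finished planning.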
[email protected] 1 point 1 year ago

It seems like you might have missed the central idea of the article. The main point is that the privileged LLM won't actually see the content itself, only the variable names. I encourage you to take a closer look at it.