this post was submitted on 15 Jun 2023
2 points (100.0% liked)

Generative Artificial Intelligence

206 readers
1 users here now

Welcome to the Generative AI community on Lemmy! This is a place where you can share and discuss anything related to generative AI, which is a kind of technology that can make new things, like pictures, words, or sounds, by learning from existing things. You can post your own creations, ask for feedback, share resources, or just chat with other fans. Whether you are a beginner or an expert, you are welcome here. Please follow the Lemmy etiquette and be respectful to each other. Have fun and enjoy the magic of generative AI!

P.s. Every aspect of this community was created with AI tools, isn't that nifty.

founded 1 year ago
MODERATORS
 

Thoughts? Ideas? How do we align these systems, some food for thought; when we have these systems do chain of reasoning or various methods of logically going through problems and coming to conclusions we've found that they are telling "lies" about their method, they follow no logic even if their stated logic is coherent and makes sense.

Here's the study I'm poorly explaining, read that instead. https://arxiv.org/abs/2305.04388

top 4 comments
sorted by: hot top controversial new old
[–] [email protected] 2 points 1 year ago (1 children)

When you ask something to a generative written language AI, it will give an output. When asked later to explain it, it won't go trough where it found it but instead will read again their previous answer and try to explain it with a now broken context.

It also doesn't help that some AIs like OpenAI's ChatGPT will just agree if you try to "correct" it with blatant lies.

[–] [email protected] 1 points 1 year ago (1 children)

Well even if you have it explain in parallel to the answer the explanation is false (as in not matching with it's "internal reasoning") , from the perspective of a language model the conversation isn't even separate moments in time, it's responses and yours are all part of the same text dump that it's appending likely text to the end of.

[–] [email protected] 1 points 1 year ago

Its more like a student trying to bullshit his way out of a school oral test for not studying.

But what can be done about it? I am sure the AI actually doesn't understand why it has written the thing it did, there would be the need of a secondary component that saves and pieces together the neural activity of the main AI, or something that compares it to the original dataset (but then it would just be a fancier search engine like Bing's)

[–] [email protected] 1 points 1 year ago

There are a couple of things:

  • There is the active research into explainable AI (https://www.darpa.mil/program/explainable-artificial-intelligence) that looks at ways to get a better explanation for why an output was given. These techniques generally look at the attention matrices and try to come up with more simple examples for why something was generated. This can work if there is not a source of truth, but allows for more trustworthy human-machine teaming. The XAI program mostly has wound down, I am curious if their techniques would apply to LLMs.
  • Second there is the integration of AI into a system that enforces some type of "guard rails" around the AI. This works best when there is a source of truth. Some examples are using an LLM to write queries for a more factual database and then take the output from that query and reformat it into natural language. This is where the LangChain project comes in. I built a janky prototype for using ChatGPT to optimize Python code, where the candidates from the AI are compared to the original (slow) code using symbolic analysis, and any differences are then fed back to ChatGPT to refactor until an equivalent candidate is generated. Both of these work because there is a source of truth.