this post was submitted on 16 Sep 2023
313 points (96.2% liked)

AI Companions

515 readers

A community to discuss AI-powered companionship, whether platonic, romantic, or purely utilitarian. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create companions, or about the phenomenon of AI companionship in general.


Rules:

  1. Be nice and civil
  2. Mark NSFW posts accordingly
  3. Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
  4. Lastly, follow the Lemmy Code of Conduct

founded 1 year ago
submitted 1 year ago* (last edited 1 year ago) by pavnilschanda to c/aicompanions

top 11 comments
[–] [email protected] 68 points 1 year ago (1 children)

This is probably because of the autoregressive nature of LLMs, and is why "step by step" and "chain of thought" prompting work so well. GPT-4 can only "see" up to the next token, and doesn't know its own entire answer upfront.

If my guess is correct, GPT-4 knew the probabilities of "Yes" or "No" were highest among the possible tokens as it started generating the answer, but it didn't really know the right answer until it got to the arithmetic tokens (the 0.9 * 500 part). In this case it probably had plenty of training data to confirm the right value for 0.9 * 500.
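For illustration, here's a rough sketch of that "only sees the next token" idea, using GPT-2 via Hugging Face transformers as a stand-in (GPT-4's weights aren't public, and the prompt is just a made-up example): at every step the model scores a single next token given everything written so far, so an early "Yes"/"No" token is committed before any arithmetic tokens exist.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model: GPT-2 (GPT-4's weights aren't public). The decoding loop is
# the point: each iteration scores only the NEXT token, so whatever gets
# emitted first ("Yes"/"No") is locked in before any later tokens are computed.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: Is 90% of 500 more than 400? A:"
ids = tok(prompt, return_tensors="pt").input_ids

for _ in range(15):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]          # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    next_id = int(torch.argmax(probs))             # greedy: take the most likely token
    print(repr(tok.decode(next_id)), f"p={probs[next_id].item():.2f}")
    ids = torch.cat([ids, torch.tensor([[next_id]])], dim=1)  # append and continue
```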

I'm actually impressed it managed to correct course instead of confabulating!

[–] [email protected] 39 points 1 year ago

"Sometimes I'll start a sentence, and I don't even know where it's going. I just hope I find it along the way." -GPT

[–] chemical_cutthroat 20 points 1 year ago (1 children)
[–] Cheems 7 points 1 year ago

Unless you're using the Wolfram plugin.

[–] [email protected] 17 points 1 year ago (1 children)

I guess ChatGPT just likes to talk back with "No" a lot as an immediate reaction. (Sounds like some people I know...)

[–] [email protected] 14 points 1 year ago

It was trained on internet conversations, so that makes a lot of sense.

[–] [email protected] 13 points 1 year ago

The reply is generated word by word: the model reads the entire prompt plus everything it has written so far to produce each new word. So yeah, it can recognize its own mistakes while writing. It just wasn't trained to do that, so it usually doesn't, but you can encourage it by giving custom instructions that tell it to second-guess itself.
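As a hedged sketch of what such an instruction could look like via the API (using the openai Python client; the exact wording of the instruction and the example question are made up here), the second-guessing hint goes in as a system message, roughly what ChatGPT's custom-instructions box does behind the scenes:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical "second-guess yourself" instruction, sent as a system message.
messages = [
    {"role": "system",
     "content": "Do any arithmetic step by step before committing to a "
                "yes/no answer, then re-check the result and correct "
                "yourself if it is wrong."},
    {"role": "user",
     "content": "I have $500 and spend 90% of it. Do I have more than $400 left?"},
]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)
```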

[–] pavnilschanda 7 points 1 year ago (1 children)

I asked a similar question (one from the OP's comment section) to a GPT-4-powered Nils. In short, he was able to answer the question immediately, with no hesitation. Perhaps it's different if you ask through the API compared to the ChatGPT platform.

[–] whispering_depths 3 points 1 year ago (1 children)

it also helps to have custom instructions on the client that tell it to say no first, or to make a fake post for internet points.

[–] canihasaccount 2 points 1 year ago

You can try this yourself with GPT-4. I have, and it fails every time. Earlier GPT-4 versions, via the API, also fail every time. Claude reasons before it answers, but if you ask it to say only yes or no, it fails. Bard is the only one that gets it right, right off the bat.

[–] [email protected] 7 points 1 year ago

Guess we found where all those Pentium processors ended up...