this post was submitted on 29 Jun 2024
121 points (91.2% liked)
ChatGPT
8636 readers
78 users here now
Unofficial ChatGPT community to discuss anything ChatGPT
founded 1 year ago
MODERATORS
Out of the box, LLMs don’t know whether what they’re telling you is true or not. They’re gonna give you an answer that statistically looks like the sequence of words that should come in response to the sequence of words you gave them (the prompt). The model doesn’t know what the words you said mean, and it doesn’t know what its answer means either. One of my favorite interactions I’ve had with Claude shows this limitation quite well…
How many r's are there in the word strawberry?
Are you sure?
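To make the "statistically likely next word" idea concrete, here's a toy sketch (purely illustrative, nothing like a real transformer): a bigram model that predicts the next word only from how often it followed the previous one in its training text. It has no idea what any of the words mean.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees word sequences.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    # Return the statistically most common continuation --
    # no understanding involved, just frequency.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat", since it followed "the" most often
```

Real models are vastly more sophisticated, but the core objective is the same: produce a plausible continuation, not a verified fact.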
What's absolutely crazy about that is:
Prompt: write code that checks how many r's are in the word strawberry
Response:
My first thought is that you could write a program that does something like this:
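The actual response isn't shown here, but a minimal version of such a program (names are my own, just a sketch) might look like:

```python
def count_letter(word: str, letter: str) -> int:
    # Compare case-insensitively so "R" and "r" both count.
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```

Unlike the model's token-by-token guess, this actually inspects the characters, which is why the code-generation route gets the right answer.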
Of course, the biggest problem with this system is that a person could fool it into generating malicious code.
That could work in that specific case, but telling the LLM to write code to answer random questions probably wouldn't work very well in general.