this post was submitted on 29 Jun 2024
121 points (91.2% liked)

ChatGPT

Small rant: basically, the title. If, instead of answering every question, it said it doesn't know the answer when it doesn't, it would be far more trustworthy.

[–] [email protected] 61 points 2 days ago (5 children)

LLMs don’t know if what they’re telling you is true or not, out of the box. It’s gonna give you an answer that statistically looks like a sequence of words that should come in response to the sequence of words (the prompt) you gave it. It doesn’t know what the words you said mean, and it doesn’t know what it answered means either. One of my favorite interactions I’ve had with Claude shows this limitation quite well…

How many r's are there in the word strawberry?

There are 2 r's in the word "strawberry".

Are you sure?

You're right to ask for confirmation. I apologize for my mistake in the previous response. Let me correct that:

There is actually only 1 'r' in the word "strawberry".

Thank you for prompting me to double-check. It's important to provide accurate information.
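
That exchange is exactly what you'd expect from pure next-token prediction. Very roughly, and with hypothetical helper names (this is a conceptual sketch, not any real API), the generation loop looks like this:

using System;
using System.Text;

class NextTokenSketch
{
    // Hypothetical stand-in for the model: given the text so far, return the
    // statistically most likely next token. Nothing here checks whether the
    // resulting claim is true.
    static string MostLikelyNextToken(string textSoFar)
    {
        throw new NotImplementedException("The actual model would go here.");
    }

    static string GenerateAnswer(string prompt)
    {
        var answer = new StringBuilder();

        // The model keeps appending whatever token looks most plausible
        // given the prompt plus what it has already written.
        while (true)
        {
            string next = MostLikelyNextToken(prompt + answer.ToString());
            if (next == "<end>")   // hypothetical end-of-sequence token
                break;
            answer.Append(next);
        }

        return answer.ToString();
    }
}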

[–] AceBonobo 21 points 2 days ago* (last edited 2 days ago) (2 children)

What's absolutely crazy about that is:

Prompt: write code that checks how many r's are in the word strawberry

Response:

using System;

class Program
{
    static void Main()
    {
        string word = "strawberry";
        char targetChar = 'r';
        int count = 0;

        foreach (char c in word)
        {
            if (c == targetChar)
                count++;
        }

        Console.WriteLine($"The letter '{targetChar}' appears {count} times in the word \"{word}\".");
    }
}
[–] drislands 8 points 2 days ago (1 children)

My first thought is that you could write a program that does something like this:

  • Receive prompt "how many times does R appear in the word 'strawberry'"
  • Run the prompt through LLM saying "reword the following into a command to generate code that will answer the question" or something like that
  • Run the results through LLM
  • Compile and run the results
  • Provide the output to the user

Of course, the biggest problem with this system is that a person could fool it into generating malicious code.
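
Very roughly, something like this sketch of the pipeline, where the LLM call and the compile-and-run step are hypothetical placeholders (the compile-and-run step is exactly where the sandboxing against malicious code would have to go):

using System;

class CodeAnsweringPipeline
{
    // Hypothetical placeholder for a real LLM API call.
    static string AskLlm(string prompt) =>
        throw new NotImplementedException("Call an actual LLM service here.");

    // Hypothetical placeholder: compile the generated source (e.g. with Roslyn),
    // run it in a sandbox, and return whatever it prints.
    static string CompileAndRun(string source) =>
        throw new NotImplementedException("Compile and execute in a sandbox here.");

    static string Answer(string userQuestion)
    {
        // 1. Reword the question into a command to generate code.
        string codegenInstruction = AskLlm(
            "Reword the following into a command to write a C# program that prints the answer: "
            + userQuestion);

        // 2. Have the LLM generate code from that instruction.
        string generatedCode = AskLlm(codegenInstruction);

        // 3. Compile and run the generated code, and
        // 4. return its output to the user as the answer.
        return CompileAndRun(generatedCode);
    }
}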

[–] [email protected] 3 points 2 days ago

That could work in that specific case, but telling the LLM to write code to answer random questions probably wouldn't work very well in general.

[–] [email protected] 1 points 1 day ago

The code does look like code that counts Rs. The training data probably included tons of code that “counts character X in string Y”, so ChatGPT “knows” what code that counts characters in a string looks like. It similarly “knows” what a string looks like in the language, and what an application entry point looks like, etc. I’m not so familiar with C# that I’d know if it compiles or not. ChatGPT doesn’t either, but it has the advantage of having seen a whole freaking lot of C# code before.
