AFKBRBChocolate 2 points 1 year ago

> It doesn’t actually KNOW what the prompt is and it doesn’t reason the answer.

Right, no AI "knows" anything. To know something implies understanding and, as we said earlier, AIs are just computer programs; they don't know anything, they just produce an output based on an input.
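
Since "output based on an input" can sound abstract, here's a minimal toy sketch in Python. It is not how any real LLM works internally (real models use learned weights over tokens, not a hand-written lookup table), but it shows the shape of the claim: the program continues the input pattern with no understanding behind it.

```python
import random

# Toy stand-in for a language model: a hand-written "what tends to come
# next" table. A real LLM learns something like this from data, at vastly
# larger scale, but it's still pattern continuation, not knowledge.
NEXT_WORD = {
    "the": ["cat", "dog", "answer"],
    "cat": ["sat", "ran"],
    "sat": ["down", "on"],
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        candidates = NEXT_WORD.get(words[-1])
        if not candidates:
            break
        # Output based on input: pick a statistically plausible next word.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat down"
```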

Two areas where it's fun to play with LLMs are math and cooking. Math has crisp rules, and answers can be shown to be right or wrong. LLMs get a lot of complex math problems wrong: they'll give you something that looks right, because their model includes what an answer should look like, but all they're doing is producing an answer that, according to that model, looks like it satisfies the prompt.
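
To make the "right or wrong" part concrete, here's a rough sketch of how you could check an LLM's arithmetic programmatically. `ask_llm` is a hypothetical placeholder, not a real API; the hard-coded reply stands in for the plausible-looking-but-wrong answers described above.

```python
import re

def ask_llm(prompt: str) -> str:
    # Hypothetical stub: imagine this sends `prompt` to a chat model and
    # returns its reply. Hard-coded here so the sketch is self-contained.
    return "The product is 5,040,316,287."

def check_multiplication(a: int, b: int) -> bool:
    reply = ask_llm(f"What is {a} * {b}? Reply with just the number.")
    digits = re.sub(r"\D", "", reply)  # drop commas, prose, punctuation
    return bool(digits) and int(digits) == a * b  # crisp ground truth

# 71,234 * 70,757 = 5,040,304,138, so the confident-sounding answer
# from the stub above is simply wrong.
print(check_multiplication(71_234, 70_757))  # False
```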

Cooking is similar. There's no crisp right or wrong, but the same process is at play. If you ask it for a recipe for something uncommon, it's going to give you one, and it will likely have the right kinds of ingredients, but if you make it there's a decent chance it will taste terrible.