this post was submitted on 20 Jan 2024
631 points (98.9% liked)

[–] [email protected] 5 points 10 months ago* (last edited 10 months ago)

No. LLMs have context and know that words take their meaning from context. This would be the exact opposite of "AI". It's analogous to defining a global variable "hot" as 1.9 million kelvin and then blindly using that value everywhere the word "hot" appears.
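The "global variable" anti-pattern above can be sketched in a few lines. This is a toy illustration, not how any LLM actually works internally; the temperature values are rough assumptions made up for the example:

```python
# Anti-pattern: one fixed meaning for "hot", applied everywhere.
HOT_KELVIN = 1_900_000

def naive_hot(context: str) -> int:
    # Ignores context entirely; "hot tea" and "hot star" get the same answer.
    return HOT_KELVIN

# Context-dependent meanings, loosely analogous to what an LLM learns.
# These numbers (degrees Fahrenheit) are illustrative assumptions only.
HOT_BY_CONTEXT = {
    "tea": 170,
    "frying oil": 350,
    "car oil": 250,
}

def contextual_hot(context: str) -> int:
    # Same word, different value depending on what it's applied to.
    return HOT_BY_CONTEXT[context]

print(naive_hot("tea"))       # same enormous number no matter the context
print(contextual_hot("tea"))  # a sensible, context-appropriate value
```

The point of the contrast: the naive version returns 1,900,000 for everything, while the contextual version resolves "hot" differently for tea versus frying oil, which is the behavior the comment attributes to LLMs.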

AI, even in its current iterations, knows that a hot stove will be hotter than hot tea, and that both are cooler than the "hot" that is the surface of the sun.

The whole achievement of LLMs is that they learn all of that context: to guess, with some degree of confidence, that when you're talking about hot tea you mean 160-180 degrees or whatever; that hot oil might be 350 degrees if you're frying, or 250 degrees if you're talking about cars; and that if you're talking about people, hot means attractive.

That's exactly what LLMs do today. Not 100% perfectly; there are errors and hallucinations and whatever else, but those are the exception, not the norm.