this post was submitted on 28 Jun 2023
28 points (100.0% liked)
Showerthoughts
you are viewing a single comment's thread
I don’t know whether your source is talking about ChatGPT or GPT-4, but the podcast I linked above is about AI researchers who knew ChatGPT’s limitations very well and were therefore strongly of the opinion that LLMs couldn’t reason, and then GPT-4 came out and changed their whole thinking about LLMs. They tried to come up with experiments that could reliably tell whether some reasoning had happened or not. GPT-4 didn’t pass all of them, but it passed a bunch more than ChatGPT could, and a bunch more than it should have. And they have no idea how exactly it succeeds at those, if the thing really can’t reason.
You’re correct that LLMs only guess the next best word, over and over, which makes it even more mysterious why some LLMs pass the tests you just mentioned. Theoretically, they really shouldn’t.
Unless… (and that’s my point) theoretically we shouldn’t either, but there is something about neural designs and bullshitting things out one word at a time that makes those specific kinds of reasoning work, and that would explain why both LLMs and humans succeed at some of them.
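(Just to spell out what I mean by “guessing the next best word, over and over”: here’s a minimal sketch of that loop in Python, using the Hugging Face transformers library with GPT-2 as a small stand-in model. The model choice, prompt, and sampling details are only my illustration, not how GPT-4 itself is actually built or served.)

```python
# Minimal sketch of autoregressive next-token generation.
# Assumption: GPT-2 via the Hugging Face transformers library is used purely as
# an illustrative stand-in; the point is the loop, not the specific model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "A shower thought I had today:"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                        # generate 20 tokens, one at a time
        logits = model(ids).logits[0, -1]      # scores for every possible next token
        probs = torch.softmax(logits, dim=-1)  # turn scores into a probability distribution
        next_id = torch.multinomial(probs, 1)  # sample one token from that distribution
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)  # append it and repeat

print(tokenizer.decode(ids[0]))
```

Every token comes out of exactly that kind of loop; the surprising part is how much apparent reasoning falls out of it anyway.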