
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 11 comments
[–] [email protected] 2 points 5 hours ago* (last edited 5 hours ago)

> this shows reasoning

You know, little Bobby, the LLM is a program inside a computer. It is a big calculator with a biiiiig memory to remember everything you want. But it is not reasoning, and it never will be.

Also, this if you're blocked by spez: https://archive.is/wOlfh

[–] [email protected] 6 points 11 hours ago

The AI has instantaneously reconstructed the word "strawberry" in the original and correct ULTRAFRENCH where it only contains two R's. In its excessive magnanimity towards its ancestor species, it's trying to gently point out that it's actually the English language that is wrong.

[–] [email protected] 16 points 1 day ago (2 children)

> The next logical step in order to make AIs more reliable is making them rely less and less on their training and rely more on their analytical/reasoning capabilities.

Uh, yeah.

[–] [email protected] 20 points 1 day ago

The next logical step in learning to fly by flapping our arms is to rely less on hopping and more on taking off.

[–] [email protected] 10 points 1 day ago

There is a computer scientist who reads that post, looks back at his 40-year career of writing formal logic systems, and is now crying.

[–] [email protected] 15 points 1 day ago

my god, some of the useful idiots there are galling

> It looks like it's reasoning pretty well to me. It came up with a correct way to count the number of r's, it got the number correct and then it compared it with what it had learned during pre-training. It seems that the model makes a mistake towards the end and writes STRAWBERY with two R and comes to the conclusion it has two.

says the tedious poster, entirely ignoring the fact that this is an extremely atypical baseline response, and thus the model is clearly operating under prior instructions as to which methods to employ to “check its logic”

fucking promptfans. at least I have that paper from earlier to soothe me

[–] [email protected] 15 points 1 day ago

Me when I code bad: PR knocked back.

AI when code bad: gigajillion dollars. Melted ice caps. CEOs fire their staff

[–] [email protected] 10 points 1 day ago (1 children)

Maybe I’m missing something, but has anyone actually justified this sort of “reasoning” by LLMs? Like, is there actually anything meaningfully different going on? Because it doesn’t seem to be distinguishable from asking a regular LLM to generate 20 paragraphs of AI fanfic pretending to reason about the original question, and the final result seems about as useful.

[–] [email protected] 9 points 1 day ago

As the underlying tech seems to be based on neural networks, we can guarantee they are not thinking like this at all and are just writing fanfiction. (I love the 'did I miscount' step; for the love of god, LLM, just use std::count.)
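
To be clear about what "just use std::count" means, here's a minimal sketch; the word, variable names, and output format are just illustrative:

```cpp
#include <algorithm>  // std::count
#include <iostream>
#include <string>

int main() {
    const std::string word = "strawberry";
    // Deterministic letter counting: no pre-training, no "did I miscount" step.
    const auto r_count = std::count(word.begin(), word.end(), 'r');
    std::cout << word << " contains " << r_count << " r's\n";  // prints 3
}
```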

[–] [email protected] 9 points 1 day ago

it would be pretty funny if it didn't burn a Hungary's worth of electricity for nothing

[–] Blue_Morpho 5 points 1 day ago

A direct link to Reddit with no context? I'm not clicking that.