this post was submitted on 10 Apr 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 50 points 7 months ago (1 children)

Laurens Hof on Mastodon:

absolutely insane article:

  • the headline claims the models capable of reasoning are ready
  • first paragraph moves from 'ready' to 'on the brink'
  • 4th paragraph moves from 'on the brink' to 'hard at work, figuring it out'
  • 5th paragraph scales it down further: now the next model will only 'show progress towards reasoning'
  • halfway through LeCun admits that current models cannot reason at all

the journalistic malpractice here is honestly a parody of itself

[–] [email protected] 8 points 7 months ago (1 children)

Which account? Masto.soc search shows 3 Laurens Hofs...

[–] [email protected] 25 points 7 months ago (5 children)

Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said that current AI systems “produce one word after the other really without thinking and planning”.

Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said.

Adding reasoning would mean that an AI model “searches over possible answers”, “plans the sequence of actions” and builds a “mental model of what the effect of [its] actions are going to be”, he said.

wait, you mean the same models that supposed AI researchers were swearing had “glimmerings of intelligent reasoning” and “a complex world model” really were just outputting the most likely next word for a prompt? the current models are just fancy autocomplete but now that there’s a new product to sell, that one will be the real thing? and of course, the new models are getting pre-announced as revolutionary as interest in this horseshit in general takes a nosedive.

LeCun said Meta was working on AI “agents” that could, for instance, plan and book each step of a journey, from someone’s office in Paris to another in New York, including getting to the airport.

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid? like, there’s apps that do this already. this shit was solved already by application of the least-terrible surviving algorithms from the first AI boom. what the fuck is the point of re-solving travel planning, but now incredibly expensive and you can’t trust the results?

[–] [email protected] 19 points 7 months ago

So, if anyone is keeping score, the promise of Artificial Intelligence has descended from "the computers on Star Trek" to "spicy ticket-booking".

[–] [email protected] 18 points 7 months ago (1 children)

Ah yes, "getting to the airport", one of the great unsolved challenges in computing.

[–] [email protected] 14 points 7 months ago

in order to solve the Traveling Salesman Problem, the first step is to use a machine model to confirm the user isn’t a salesman

[–] [email protected] 17 points 7 months ago* (last edited 7 months ago) (2 children)

the thing that bothers me about that LeCun statement is that it's another of those not-even-wrong fuckers with an implicit assumption: that the problem is not that it doesn't have intelligence, just that the intelligence isn't very advanced yet - "oh yeah it just didn't think ahead! that's why foot in mouth! it's like your drunk friend at a party!"

which, y'know, is not the case. but they all fucking speak with that implicit foundation, as though the intelligence is proven fact instead of total suggestion (I wanted to say "conjecture", but that isn't the right word either)

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid?

it's also the pitch I keep seeing from a number of places, including that rabbit or whatever the fuck thing? and, frankly, can we not? these goddamn things can barely parse sentences and keep context, and someone wants to tell me that a model use is for it to plan my travel? with visas and flight times and transfers? nevermind all the extra implications of accounting for real-world issues (e.g. political sensitivity), preferences in sight-seeing, data privacy considerations (visiting friends)....

like it's just a gigantic fucking katamari ball of nope

[–] [email protected] 13 points 7 months ago

someone wants to tell me that a model use is for it to plan my travel?

I don't think any of these people have ever traveled. Honestly, I used to work for a company where the corporate travel people mostly lived in a small village in Germany, and their recommendations could be insane sometimes, but at least they knew what being a human was like.

[–] [email protected] 6 points 7 months ago* (last edited 7 months ago) (1 children)

bro, bro. i'm not going to answer your question about the obvious and glaring problems, but here read these three preprints that are very exciting about the possibilities!!! no i can't just explain in my own words what they say. but if you cannot refute the mathematics (you can tell it's real maths because it's got squiggly symbols in it) then you must acquit

[–] [email protected] 4 points 7 months ago

if you cannot refute, you must not compute

[–] [email protected] 13 points 7 months ago* (last edited 7 months ago)

This is yet another example of the people calling the shots here being completely detached from the reality of the average person and bereft of imagination.

Surely the plebs would all want an underpaid secretary who plans their private jet trips for them so that they don't have to interact with anyone. It's the dream! I can't imagine life without that, surely they need it too!

[–] [email protected] 22 points 7 months ago (2 children)

“We will be talking to these AI assistants all the time,” LeCun said. “Our entire digital diet will be mediated by AI systems.”

did u kno that there still exist people who take anything that Yann LeCun says seriously

[–] [email protected] 9 points 7 months ago* (last edited 7 months ago)

~~Big Yann~~ Mid Yann

[–] [email protected] 2 points 7 months ago (1 children)

Does he have a history of over promising and under delivering?

[–] [email protected] 13 points 7 months ago

that would be a key part of his job description, yes

[–] [email protected] 15 points 7 months ago (2 children)

Brad Lightcap, OpenAI’s chief operating officer

I'm sorry your COO has a pr0n/Futurama mashup name, OpenAI

[–] [email protected] 13 points 7 months ago (1 children)

Sounds like something autocomplete would make up. Are we sure that is a real person this time?

[–] [email protected] 11 points 7 months ago

the AI that was rejected for the job of basilisk

[–] [email protected] 10 points 7 months ago (1 children)

Brad Lightcap, the decidedly less successful brother of Buzz Lightyear.

[–] [email protected] 7 points 7 months ago

Best known for their work as a VR headset head model

[–] [email protected] 13 points 7 months ago (1 children)

Genuine People Personalities? Sounds ghastly.

It is.

[–] [email protected] 13 points 7 months ago

Oh god, I let my NiceGrandFatherAIBot loose on Facebook, and now it's called me a sheeple, a cuck and a pedophile for telling him Biden is the president.