this post was submitted on 27 May 2024
1102 points (98.0% liked)

Technology


You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."

top 50 comments
[–] givesomefucks 338 points 5 months ago (30 children)

They keep saying it's impossible, when the truth is it's just expensive.

That's why they won't do it.

You'd have to train the AI only on good sources (scientific literature, not social media) and then pay experts to talk with it for long periods of time, giving feedback directly to the AI.

Essentially, if you want a smart AI you need to send it to college, not drop it off at the mall unsupervised for 22 years and hope for the best when you pick it back up.

[–] [email protected] 154 points 5 months ago (5 children)

No, he's right that it's unsolved. Humans aren't great at reliably telling truth from fiction either. If you've ever been in a highly active comment section, you'll notice certain "hallucinations" developing, usually because someone came along sounding confident and everyone just believed them.

We don't even know how to get actual people to do this, so how does a fancy Markov chain do it? It can't. I don't think you solve this problem without AGI, and that's something AI evangelists don't want to think about, because then the conversation changes significantly. They're in this for the hype bubble, not the ethical implications.
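The "fancy Markov chain" quip is easy to make concrete. Here's a minimal sketch in Python (the tiny corpus is invented for illustration): a bigram model only records which word followed which, so it strings together locally plausible text with no notion of whether any of it is true.

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words observed directly after it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the chain: each next word is sampled purely by observed frequency.
    Nothing here models meaning or truth."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Toy corpus mixing a true statement with a troll post
corpus = "the cheese slides off the pizza the glue holds the cheese"
model = build_bigrams(corpus)
print(generate(model, "the"))
```

Scale the lookup table up to billions of parameters and the fluency improves enormously, but the mechanism still samples by learned frequency rather than by checking facts.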

[–] dustyData 75 points 5 months ago (3 children)

We do know. It's called critical thinking education. This is why we send people to college. Of course there are highly educated morons, but we are edging bets. This is why the dismantling or coopting of education is the first thing every single authoritarian does. It makes it easier to manipulate masses.

[–] [email protected] 58 points 5 months ago (8 children)

"Edging bets" sounds like a fun game, but I think you mean "hedging bets", in which case you're admitting we can't actually do this reliably with people.

And we certainly can't do that with an LLM, which doesn't actually think.

[–] [email protected] 54 points 5 months ago (3 children)

I'll let you in on a secret: scientific literature has its fair share of bullshit too. The issue is that it's much harder to figure out that it's bullshit, unless it's the most blatant horseshit you've ever seen. So while it absolutely makes sense to say "let's just train these on good sources," no source is purely that. Of course, it's still better to do it that way than the way they do it now.

[–] givesomefucks 34 points 5 months ago (8 children)

The issue is, it is much harder to figure out its bullshit.

Google AI suggested you put glue on your pizza because a troll said it on Reddit once...

Not all scientific literature is perfect. Which is one of the many factors that will still make my plan expensive and time-consuming.

You can't throw a toddler in a library and expect them to come out knowing everything in all the books.

AI needs that guided teaching too.

[–] Zarxrax 45 points 5 months ago (1 children)

In addition to the other comment, I'll add that even if you train the AI on good and correct sources of information, that still doesn't necessarily mean it will give you a correct answer every time. It's more likely, but not ensured.

[–] [email protected] 31 points 5 months ago (7 children)

it's just expensive

I'm a mathematician who's been following this stuff for about a decade or more. It's not just expensive. Generative neural networks cannot reliably evaluate truth values; it will take time to research how to improve AI in this respect. This is a known limitation of the technology. Closely controlling the training data would certainly make the information more accurate, but that won't stop it from hallucinating.

The real answer is that they shouldn't be trying to answer questions using an LLM, especially because they had a decent algorithm already.

[–] [email protected] 222 points 5 months ago (3 children)

In the interest of transparency, I don't know if this guy is telling the truth, but it feels very plausible.

[–] [email protected] 126 points 5 months ago (11 children)

It seems like the entire industry is in pure panic about AI, not just Google. Everyone hopes that LLMs will end years of homeopathic growth through iteration of long-existing technology, which is why it attracts tons of venture capital.

Google, which sits where IBM was decades ago, is too big, too corporate and too slow now, so they needed years to react to this fad. When they finally did, all they were able to come up with was a rushed equivalent of existing LLMs that suffers from all of the same problems.

[–] [email protected] 59 points 5 months ago (1 children)

They all hope it'll end years of having to pay employees.

[–] NutWrench 53 points 5 months ago (2 children)

I think this is what happens to every company once all the smart / creative people have gone. All you have left are the "line must always go up" business idiots who don't understand what their company does or know how to make it work.

[–] Hubi 144 points 5 months ago (4 children)

The solution to the problem is to just pull the plug on the AI search bullshit until it is actually helpful.

[–] [email protected] 46 points 5 months ago (2 children)

Absolutely this. Microsoft is going headlong into the AI abyss. Google should be the company that calls it out and says "No, we value the correctness of our search results too much".

It would obviously be a bullshit statement at this point after a decade of adverts corrupting their value, but that's what they should be about.

[–] [email protected] 132 points 5 months ago (1 children)

Good. Nothing will get us through the hype cycle faster than obvious public failure. Then we can get on with productive uses.

[–] Tier1BuildABear 44 points 5 months ago (16 children)

I don't like the sound of getting on with "productive uses" either though. I hope the entire thing is a catastrophic failure.

[–] Resol 94 points 5 months ago (4 children)

If you can't fix it, then get rid of it, and don't bring it back until it's good enough to not cause egregious problems (which is never, so basically don't ever think about using your silly Gemini thing in your products ever again).

[–] masquenox 83 points 5 months ago (15 children)

Since when has feeding us misinformation been a problem for capitalist parasites like Pichai?

Misinformation is literally the first line of defense for them.

[–] Badeendje 34 points 5 months ago (8 children)

But this is not misinformation, it is uncontrolled nonsense. It directly devalues their offering of being able to provide you with an accurate answer to something you look for. And if their overall offering becomes less valuable, so does their ability to steer you using their results.

So while the incorrect nature is not a problem in itself for them, (as you see from his answer)… the degradation of their ability to influence results is.

[–] [email protected] 78 points 5 months ago (5 children)

Here's a solution: don't make AI provide the results. Let humans answer each other's questions like in the good old days.

[–] [email protected] 36 points 5 months ago (4 children)

Whatever happened to Jeeves? He seemed like a good guy. He probably burned out.

[–] [email protected] 76 points 5 months ago (1 children)

Has No Solution for Its AI Providing Wildly Incorrect Information

Don't use it??????

AI has no means to check the heaps of garbage data it has been fed against reality. So even if someone were to somehow code one capable of deep, complex epistemological analysis (at which point it would already be something far different from what the media currently calls AI), as long as there's enough flat-out wrong stuff in its data, there's a growing chance of it screwing up.

[–] [email protected] 74 points 5 months ago (3 children)

Wow. In the 2000s and 2010s, my impression was that Google was an amazing company where brilliant people worked to solve big problems and make the world a better place. For the last 10 years, all I was hoping for was that they would just stop making their products (Search, YouTube) worse.

Now they're just blindly riding the AI hype train, because "everyone else is doing AI".

[–] kwebb990 69 points 5 months ago (3 children)

and our parents told us Wikipedia couldn't be trusted....

[–] [email protected] 66 points 5 months ago (1 children)

Replace the CEO with an AI. They're both good at lying and telling people what they want to hear, until they get caught.

[–] xantoxis 65 points 5 months ago (2 children)

"It's broken in horrible, dangerous ways, and we're gonna keep doing it. Fuck you."

[–] joe_archer 64 points 5 months ago (3 children)

It is probably the most telling demonstration of the terrible state of our current society that one of the largest corporations on earth, which got where it is today by providing accurate information, is now happy to knowingly provide incorrect, and even dangerous, information under its own name, and not give a flying fuck about it.

[–] [email protected] 61 points 5 months ago (4 children)

The best part of all of this is that now Pichai is going to really feel the heat of all of his layoffs and other anti-worker policies. Google was once a respected company and a place where people wanted to work. Now they're just some generic employer with no real lure to bring people in. It worked fine when all he had to do was raise prices on their current offerings and stuff in more ads, but when it comes to actual product development, they are so hopelessly adrift that it's pretty hilarious watching them flail.

You can really see that consulting background of his doing its work. It's actually kinda poetic because now he'll get a chance to see what actually happens to companies that do business with McKinsey.

[–] badbytes 59 points 5 months ago (2 children)

Step 1. Replace CEO with AI. Step 2. Ask New AI CEO, how to fix. Step 3. Blindly enact and reinforce steps

[–] PumpkinEscobar 53 points 5 months ago (7 children)

Rip up the Reddit contract and don’t use that data to train the model. It’s the definition of a garbage in garbage out problem.

[–] [email protected] 51 points 5 months ago (8 children)

these hallucinations are an "inherent feature" of AI large language models (LLM), which is what drives AI Overviews, and this feature "is still an unsolved problem."

Then what made you think it’s a good idea to include that in your product now?!

[–] mrfriki 49 points 5 months ago* (last edited 5 months ago) (3 children)

So if a car maker releases a car model that randomly turns abruptly to the left for no apparent reason, do you simply say "I can't fix it, deal with it"? No, you pull it off the market, try to fix it and, if that is not possible, then you retire the model before it kills anyone.

[–] [email protected] 49 points 5 months ago (2 children)

If you train your AI to sound right, your AI will excel at sounding right. The primary goal of LLMs is to sound right, not to be correct.

[–] [email protected] 47 points 5 months ago (8 children)

Media needs to stop calling this AI. There is no intelligence here.

The content-generator models only know how to put probabilistic tokens together; they have no ability to reason.

Evaluating text to determine whether it's factual is a currently unsolved problem... at least until we have artificial general intelligence.

AI will not be able to act like real AI until we solve real AI. That is the currently open problem.
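The "probabilistic tokens" point above can be shown with a toy example (the prompt, the candidate tokens, and every probability value here are invented for illustration, not taken from any real model): decoding just picks a likely continuation, and if the training data was full of a confident joke, the joke wins.

```python
# Hypothetical next-token distribution after a prompt like
# "to keep cheese on pizza, add". The scores are made up to
# mimic a corpus polluted by a popular troll post.
next_token_probs = {
    "glue": 0.41,            # frequent in scraped joke threads
    "more cheese": 0.33,     # correct, but rarer in the data
    "a binding sauce": 0.26,
}

def pick_next(probs):
    """Greedy decoding: return the highest-probability token.
    Nothing here consults facts; only the learned frequencies."""
    return max(probs, key=probs.get)

print(pick_next(next_token_probs))  # → glue
```

The decoder is working exactly as designed; the wrong answer is simply the statistically favored one.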

[–] Fedditor385 43 points 5 months ago (2 children)

This is so wild to me... as a software engineer, if my software doesn't work 100% of the time as requested in the specification, it fails tests, doesn't get released and I get told to fix all issues before going live.

AI is basically another word for unreliable software full of bugs.

[–] [email protected] 40 points 5 months ago (3 children)

Have they tried not using it? 🤦

[–] SomeGuy69 39 points 5 months ago (1 children)

I mean, they could disable it until it works; otherwise they're knowingly misleading people.

[–] go_go_gadget 33 points 5 months ago

Obviously you don't have a business degree.

[–] [email protected] 38 points 5 months ago (5 children)

How about stop forcing it on us?

[–] Tygr 38 points 5 months ago

Google CEO essentially says the first result should not be trusted.

[–] [email protected] 37 points 5 months ago

Maybe if you can't get it to be accurate you shouldn't be trying to insert it into everything.

[–] johannesvanderwhales 37 points 5 months ago (47 children)

TBH this is surprisingly honest.

[–] Toneswirly 36 points 5 months ago

The answer is don't inflate your stock price by cramming the latest tech du jour into your flagship product... but we all know that's not an option.

[–] retrospectology 35 points 5 months ago* (last edited 5 months ago) (3 children)

This is what happens every time society goes along with tech bro hype. They just run directly into a wall. They are the embodiment of "Didn't stop to think if they should" and it's going to cause a lot of problems for humanity.

[–] [email protected] 34 points 5 months ago (10 children)

I think we should stop calling things AI unless they actually have their own intelligence independent of human knowledge and training.

[–] CrowAirbrush 30 points 5 months ago (6 children)

I have a solution: stop using their search engine to begin with, and slowly replace every other Google product you use.

[–] BrokenGlepnir 29 points 5 months ago (1 children)

There is apparently no limit to calling a bug a feature

[–] RizzRustbolt 28 points 5 months ago (1 children)

The model literally ate The Onion, and now they can't get it to throw it back up.

[–] [email protected] 28 points 5 months ago

I know an easy fix. Just don't do ai.

[–] [email protected] 27 points 5 months ago* (last edited 5 months ago) (1 children)

They polluted their model with the sewage of the Internet.

The only worse thing they could have done is base their entire LLM dataset on 4chan.
