submitted 2 weeks ago by jeffw to c/technology
[-] [email protected] 218 points 2 weeks ago

I remember seeing a comment on here that said something along the lines of “for every dangerous or wrong response that goes public there’s probably 5, 10 or even 100 of those responses that only one person saw and may have treated as fact”

[-] [email protected] 49 points 2 weeks ago

Fuck. I'm stealing this comment - it's brilliant.

[-] [email protected] 58 points 2 weeks ago
[-] [email protected] 63 points 2 weeks ago
[-] [email protected] 48 points 2 weeks ago

What a nice original comic you made

[-] davidgro 29 points 2 weeks ago

... I've never seen that attributed before. Wow.

[-] Nobody 139 points 2 weeks ago

Tech company creates the best search engine -> world domination -> becomes a VC company in a tech trench coat -> destroys the search engine to prop up bad investments in ~~artificial intelligence~~ advanced chatbots

[-] stellargmite 62 points 2 weeks ago

Then hire cheap human intelligence to correct the AI's hallucinatory trash, which was trained on actual human-generated content whose original intended audience understood the nuanced context and meaning in the first place. Wow, it's like they've shovelled a bucket of horse manure onto the pizza along with the glue. Added value for the advertisers. AI my arse. I think calling these things language models is being generous. More like energy- and data-hungry vomitrons.

[-] WhatAmLemmy 21 points 2 weeks ago* (last edited 2 weeks ago)

Calling these things Artificial Intelligence should be a crime. It's false advertising! Intelligence requires critical thought. They possess zero critical thought. They're stochastic parrots, whose only skill is mimicking human language, and they can only mimic convincingly when fed billions of examples.
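A toy illustration of the "stochastic parrot" point: even a trivial bigram model produces fluent-looking text purely by sampling word statistics, with no model of truth at all. (A deliberately crude sketch; real LLMs are neural networks over tokens, but the sampling principle is the same.)

```python
import random
from collections import defaultdict

# A "stochastic parrot" in miniature: learn which words follow which
# in a corpus, then generate text by sampling those statistics. At no
# point does the model know whether what it says is true.
corpus = (
    "the glue keeps cheese on pizza and "
    "the glue keeps the sauce on pizza"
).split()

# Bigram table: word -> list of observed successors.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, length, seed=0):
    """Generate `length` words by repeatedly sampling a successor."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # dead end: no observed successor
        out.append(random.choice(nxt))
    return " ".join(out)

print(babble("the", 6))
```

Every output is grammatical-looking because it only ever recombines what it has seen; none of it is grounded in anything.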

[-] cm0002 17 points 2 weeks ago

You either die a hero or live long enough to become the villain

[-] iAvicenna 137 points 2 weeks ago

"Many of the examples we’ve seen have been uncommon queries,"

Ah the good old "the problem is with the user not with our code" argument. The sign of a truly successful software maker.

[-] voluble 51 points 2 weeks ago

"We don't understand. Why aren't people simply searching for Taylor Swift"

[-] [email protected] 84 points 2 weeks ago* (last edited 2 weeks ago)

The reason why Google is doing this is simply PR. It is not to improve its service.

The underlying tech is likely Gemini, a large language model (LLM). LLMs handle chunks of words, not what those words convey; so they have no way to tell accurate info apart from inaccurate info, jokes, "technical truths" etc. As a result their output is often garbage.

You might manually prevent the LLM from outputting a certain piece of garbage, perhaps even a thousand pieces. But in the big picture it won't matter, because it's outputting a million different pieces of garbage; it's like trying to empty the ocean with a small bucket.
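The bucket-and-ocean point can be made concrete: hand-patching known-bad answers is a blocklist, and a blocklist only ever covers failures someone has already reported, while the model keeps generating novel ones. A hypothetical sketch (the names `KNOWN_BAD`, `generate_answer` etc. are made up for illustration, not Google's actual pipeline):

```python
# Hypothetical post-hoc filter over an LLM's answers. All names here
# are illustrative, not a real API.
KNOWN_BAD = {
    "put glue on pizza",
    "eat one rock per day",
}

def generate_answer(query):
    # Stand-in for the model: returns some string, which may be a
    # brand-new piece of nonsense the blocklist has never seen.
    return f"novel answer for: {query}"

def filtered_answer(query):
    answer = generate_answer(query)
    if answer.lower() in KNOWN_BAD:
        return "No overview available."
    return answer

# The blocklist is finite; the space of possible outputs is not,
# so any new query can produce an answer that slips past it.
print(filtered_answer("how to keep cheese on pizza"))
```

Each patch shrinks the set of *reported* failures by one while the set of *possible* failures stays effectively infinite.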

I'm not making the above up; look at the article - it's basically what Gary Marcus is saying, in different words.

And I'm almost certain that the decision makers at Google know this. However, they want to compete with the other tendrils of the GAFAM cancer for a turf called "generative models" (which includes tech like LLMs). And if their search gets wrecked in the process, who cares? That turf is safe anyway, as long as you can keep up appearances with enough PR.

Google continues to say that its AI Overview product largely outputs “high quality information” to users.

There's a three-letter word that accurately describes what Google said here: lie.

[-] [email protected] 22 points 2 weeks ago

At some point no amount of PR will hide the fact search has become useless. They know this but they're getting desperate and will try anything.

I'm waiting for Yahoo to revive their link directory or for Mozilla to revive DMOZ. That will be the sign that shit level is officially chin-height.

[-] SlopppyEngineer 74 points 2 weeks ago

Correcting over a decade of Reddit shitposting in what, a few weeks? They're pretty ambitious.

[-] atrielienz 26 points 2 weeks ago

This is perhaps the most ironic thing about the whole reddit data-scraping affair and Spez selling out reddit's user data to LLM companies. Like. We spent so much time posting nonsense. And then a bunch of people became mods to course-correct subreddits where that nonsense could be potentially fatal. And then they got rid of those mods because they protested. And now it's bots on bots on bots posting nonsense. And they want their LLMs trained on that nonsense because reasons.

[-] [email protected] 72 points 2 weeks ago

Now, instead of debugging the code, you have to debug the data. Sounds worse.

[-] [email protected] 64 points 2 weeks ago

Good, remove all the weird reddit answers, leaving only the "14 year old neo-nazi" reddit answers, "cop pretending to be a leftist" reddit answers, and "39 year old pedophile" reddit answers. This should fix the problem and restore google back to its defaults

[-] maxenmajs 60 points 2 weeks ago

Isn't the model fundamentally flawed if it can't appropriately present arbitrary results? It is operating at a scale where human workers cannot catch every concerning result before users see them.

The ethical thing to do would be to discontinue this failed experiment. The way it presents results is demonstrably unsafe. It will continue to present satire and shitposts as suggested actions.

[-] [email protected] 59 points 2 weeks ago* (last edited 2 weeks ago)

kinda reads like 'Weird Al' answers... like, Yankovic seems like a nice guy and I like his music, but how many answers could he have?

[-] [email protected] 19 points 2 weeks ago

Wish we would stop using fonts that don't make a clear difference between I and l.

[-] [email protected] 52 points 2 weeks ago

Don't worry, they'll insert it all into captchas and make us label all their data soon.

[-] Wappen 41 points 2 weeks ago

"Select the URL that answers the question most appropriately"

[-] [email protected] 51 points 2 weeks ago

This thing is way too half baked to be in production. A day or two ago somebody asked Google how to deal with depression and the stupid AI recommended they jump off the Golden Gate Bridge because apparently some redditor had said that at some point. The answers are so hilariously wrong as to go beyond funny and into dangerous.

[-] [email protected] 46 points 2 weeks ago

I was at first wondering what google had done to piss off Weird Al. He seems so chill.

[-] [email protected] 15 points 2 weeks ago

First Madonna kills Weird Al, and now Google.

[-] flop_leash_973 42 points 2 weeks ago

If you have to constantly manually intervene in what your automated solutions are doing, then it is probably not doing a very good job and it might be a good idea to go back to the drawing board.

[-] [email protected] 38 points 2 weeks ago

good luck with that.

One of the problems with a giant platform like that is that billions of people are always using it.

Keep poisoning the AI. It's working.

[-] RizzRustbolt 37 points 2 weeks ago

The thing is... google is the one that poisoned it.

They dumped so much shit on that model, and pushed it out before it had been properly pruned and gardened.

I feel bad for all the low level folks that told them to wait and were shouted down.

[-] [email protected] 16 points 2 weeks ago

a lot of shit at corporations works like that.

The worst of it happens in the video game industry. Microtransactions and invasive monetization? Started in the video game industry. Locking pre-installed features behind a paywall? Started in the video game industry. Releasing shit before it's ready to run as intended? Started in the video game industry.

[-] [email protected] 35 points 2 weeks ago

They are going back a century, to the manual telephone operator. Back to where it all started; it's the circle of tech.

[-] wildcardology 35 points 2 weeks ago

Here's an idea, Google: why not set it back to how it was 10-15 years ago?

[-] JdW 34 points 2 weeks ago

If only there were a way to show the whole world, in one simple example, how enshittification works.

Google execs: Hold my beer!

[-] [email protected] 26 points 2 weeks ago

[...] a lot of AI companies are “selling dreams” that this tech will go from 80 percent correct to 100 percent.

In fact, Marcus thinks that last 20 percent might be the hardest thing of all.

Yeah, it's well known, e.g. people say "the last 20% takes 80% of the effort". All the most tedious and difficult stuff gets postponed to the end, which is why so many side projects never get completed.

[-] scrion 15 points 2 weeks ago

It's not just the difficult stuff, but often the mundane, e.g. stability, user friendliness, polish, scalability, etc., that takes something from working in a constrained environment to an actual product. It's a chore to work on and a lot less "sexy", with never enough resources allocated to it: we've done all the difficult stuff already, how much more work can this be?

Turns out, a fucking lot.

[-] [email protected] 24 points 2 weeks ago

Isn’t that like trying to get pee out of a pool?

[-] gedaliyah 20 points 2 weeks ago

At this point, it seems like google is just a platform to message a google employee to go google it for you.

[-] frostmore 20 points 2 weeks ago

Allowing reddit to train Google's AI was a mistake to begin with. I mean, just look at reddit, and the shitlord that is spez.

There are better sources, and reddit is not one of them.

[-] Sam_Bass 18 points 2 weeks ago

I'd be tickled to get odd answers from Mr. Yankovic myself.

[-] trollbearpig 18 points 2 weeks ago* (last edited 2 weeks ago)

I looove how the people at Google are so dumb that they forgot that anything resembling real intelligence in ChatGPT is just cheap labor in Africa (Kenya, if I remember correctly) picking good training data. So OpenAI, using an army of smart humans and lots of data, built a computer program that sometimes looks smart hahaha.

But the dumbasses at Google really drank the Kool-Aid hahaha. They really believed that LLMs are magically smart, so they fed one reddit garbage unfiltered hahahaha. Just from a PR perspective it must be a nightmare for them; I really can't understand what they were thinking here hahaha, it's so pathetically dumb. Just goes to show that money can't buy intelligence, I guess.

[-] uebquauntbez 16 points 2 weeks ago

'I'm sorry, Google, I'm afraid I can't do that!'

[-] [email protected] 16 points 2 weeks ago

It's evolving, just backwards.

this post was submitted on 26 May 2024
737 points (98.4% liked)