this post was submitted on 15 Oct 2023
1044 points (97.1% liked)

Technology


Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

[–] Redredme 11 points 1 year ago (3 children)

That's such a strange question. It's almost as if you're implying that Google results don't need fact-checking.

They do. Everything found online does.

[–] [email protected] 12 points 1 year ago (1 children)

With Google, it depends on which webpage you end up on. Some are more trustworthy than others and require less fact-checking.

Generative AI can hallucinate about anything.

[–] dojan 27 points 1 year ago* (last edited 1 year ago) (1 children)

There are no countries in Africa starting with K.

LLMs aren’t trained to give correct answers, they’re trained to generate human-like text. That’s a significant difference.
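That difference can be illustrated with a toy sketch (this is a hypothetical bigram model for illustration, nothing like a real LLM): the model only learns which token tends to follow which, so it optimizes for plausible-looking text, not verified facts.

```python
import random
from collections import defaultdict

# Toy character-level bigram "language model". It only counts which
# character tends to follow which -- there is no notion of truth anywhere.
corpus = "kenya is a country in africa. kenya borders tanzania."

counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(seed: str, length: int = 40, rng=random.Random(0)) -> str:
    """Sample text by repeatedly picking a likely next character."""
    out = seed
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

# The output is statistically plausible, not fact-checked.
print(generate("k"))
```

Scaled up by many orders of magnitude, the same objective (predict likely continuations) is what makes LLM output fluent and confident even when it is wrong.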

[–] Takumidesh 2 points 1 year ago (1 children)

They also aren't valuable for asking direct questions like this.

Their value comes in call-and-response discussions: being able to pair program and work through a problem, for example. It isn't about spitting out a working solution, but about assessing a piece of information in a different way than you can, which creates a new analysis of it.

It's extraordinarily good at finding things you miss in text.

[–] dojan 1 points 1 year ago

Yeah. There are definitely tasks suited to LLMs. I've used them to condense text, write emails, and even for project planning, because they give decently good ideas if you prompt them right.

Not sure I'd use them for finding information though, even with the ability to search for it. I'd much rather just search for it myself so I can select the sources, then have the LLM process it.
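That workflow can be sketched as follows (a minimal hypothetical example: `call_llm` is a placeholder, not a real API; swap in whichever client you actually use):

```python
# The human picks the sources; the LLM only condenses them.
def call_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call.
    return f"[summary of {prompt.count('SOURCE')} sources]"

def summarize_sources(question: str, documents: list[str]) -> str:
    # Hand-selected documents are pasted into the prompt verbatim, so the
    # model condenses known-good text instead of retrieving on its own.
    blocks = "\n\n".join(
        f"SOURCE {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )
    prompt = f"Using only the SOURCE texts below, answer: {question}\n\n{blocks}"
    return call_llm(prompt)

print(summarize_sources(
    "What did Suleyman say about Google's business model?",
    ["Mustafa Suleyman said Google's business model broke the internet."],
))
```

The design point is simply that source selection stays with the human, which limits how far a hallucination can stray from the supplied text.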

[–] madnificent 1 points 1 year ago

Agree.

I found it more tempting to accept the initial answers I got from GPT-4 (and derivatives) because they are so well written. I know there are more like me.

With the advent of working LLMs, reference manuals should gain importance too. I check them more often than before because LLMs have forced me to. Could be very positive.