this post was submitted on 21 Oct 2024
348 points (96.8% liked)

Technology

top 23 comments
[–] [email protected] 84 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I love that someone even bothered with a study.

(Edit: To be clear, I am both amused, and also genuinely appreciate that the science is being done.)

[–] jeansburger 41 points 2 weeks ago (4 children)

Confirmation of anecdotes or gut feelings is still science. At some point you need data rather than experience to help people and organizations change their perception (see: most big tech companies lighting billions of dollars on fire on generative AI).

[–] [email protected] 8 points 2 weeks ago

Not to mention that, based on the numbers in the article, I imagine the AI might actually do better than an average human would. It wasn't as much of a "duh" as I thought it would be.

[–] [email protected] 3 points 2 weeks ago

Agreed!

I don't mean that sarcastically, honestly. As you said, it's still valuable science.

[–] homesweethomeMrL 1 points 2 weeks ago

That’s true. But still. Duh.

[–] [email protected] 1 points 2 weeks ago* (last edited 2 weeks ago)

You also need that stuff to shut up pseudo-sceptics. Take a random example: posture having an influence on mood. There were actually psychologists denying that. The reason for that kind of attitude is usually either a) "if there's no study on some effect, then it doesn't exist" (literature realism), or b) "some now-debunked theory of the past implied it" (incorrectness by association). Just because you're an atheist doesn't mean you should discount Catholic opinions on beer brewing; they produce some good shit. And just because the alchemists talked about transmutation, and the chemists made fun of it to distance themselves from their own history, doesn't mean that some nuclear physicist wasn't about to rain on their parade: yes, you can turn lead into gold.

[–] TheGrandNagus 6 points 2 weeks ago (1 children)

For many hundreds of years, blood-letting was an obvious thing to do. As was just giving people leeches for medical ailments. And ingesting mercury. We thought having sex with virgins would cure STDs. We thought doses of radiation were good for us. And tobacco. We thought it was obvious that the sun revolved around the Earth.

It is enormously important to scientifically confirm things, even if they do seem obvious.

[–] [email protected] 0 points 2 weeks ago (1 children)

Uhh, we didn't "think" a lot of those things. You're describing marketing that some company disseminated in order to shill their products. And many, many people paid the price in misery, or worse.

[–] TheGrandNagus 3 points 2 weeks ago

We absolutely did think those things.

[–] [email protected] 50 points 2 weeks ago (1 children)

Of course not, the whole point of disinformation is that it sounds correct, that’s AI’s bread and butter!

[–] semperverus 2 points 2 weeks ago

I don't think that the developers who came up with the processes for LLMs were really targeting that use case. It just so happens that the limitations of LLMs also lend power to disinformation. LLMs don't really know how to say "I don't know".

[–] [email protected] 40 points 2 weeks ago (1 children)

Think about it this way: remember those upside-down answer keys in the back of your grade school math textbook? Now imagine if those answer keys included just as many incorrect answers as correct ones. How would you know if you were right or wrong without asking your teacher? Until an LLM can guarantee a right answer, and back it up with real citations, it will continue to do more harm than good.

[–] tehn00bi 1 points 2 weeks ago

Certainly plausible. I’m sure they are trying to figure out how to get it to understand relationships between information now that it’s pretty good at statistical inference.

[–] Zerlyna 12 points 2 weeks ago (1 children)

Supposedly ChatGPT had an update in September, but it doesn't agree that Trump was found guilty in May on 34 counts. When I give it sources it says OK, but it doesn't retain the corrected information.

[–] [email protected] 18 points 2 weeks ago (3 children)

That's because it doesn't learn, it's a snapshot of its training data frozen in time.

I like Perplexity (a lot) because instead of using its data to answer your question, it uses your data to craft web searches, gather content, and summarise it into a response. It's like a student that uses their knowledge to look for the answer in the books, instead of trying to answer from memory whether they know the answer or not.

It is not perfect, it does hallucinate from time to time, but it's rare enough that I use it way more than regular web searches at this point. I can throw quite obscure questions at it and it will dig the answer for me.

As someone with ADHD and a somewhat compulsive need to understand random facts (e.g. "I need to know right now how the motor speed in a coffee grinder affects the taste of the coffee"), this is an absolute godsend.

I'm not affiliated or anything, and if anything better comes my way I'll be happy to ditch it. But for now I really enjoy it.
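The search-then-summarise pattern described above can be sketched in a few lines. This is a toy illustration, not Perplexity's actual pipeline; the function names and the keyword-overlap "search" are made up, with a tiny in-memory corpus standing in for the web:

```python
# Toy sketch of a search-augmented answer pipeline: turn the question
# into a search, retrieve text, and answer from the retrieved text
# instead of from model memory. All names here are hypothetical.

def search(query, corpus):
    """Return documents sharing at least one word with the query."""
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def answer(query, corpus):
    """Ground the answer in retrieved text; admit when nothing matches."""
    hits = search(query, corpus)
    if not hits:
        return "I don't know."   # no sources found, so say so
    return " ".join(hits)        # stand-in for an LLM summarisation step

corpus = [
    "Burr grinders at low speed heat beans less.",
    "Pikachu is an electric-type Pokemon.",
]
print(answer("grinder speed and coffee taste", corpus))
```

The key property is the fallback: with no retrieved sources, the sketch says "I don't know" instead of improvising, which is exactly what a bare LLM struggles to do.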

[–] [email protected] 2 points 2 weeks ago (1 children)

... it uses your data to craft web searches, gather content, and summarise it into a response.

GPT 4-o does this, too.

[–] [email protected] 1 points 2 weeks ago

Then that might not be the model the previous poster is talking about, because I have to press perplexity really hard to get it to hallucinate. Search-augmented LLMs are pretty neat.

[–] Zerlyna 1 points 2 weeks ago

Yes, but I'm saying that snapshot from September is incorrect. Why is that? Is it rigged?

[–] [email protected] 1 points 2 weeks ago

My experience is the same.

[–] pennomi 11 points 2 weeks ago

I think the next step in AI is learning how to control and direct the speech, rather than just make computers talk.

They are surprisingly good for being a mere statistical copycat of words on the internet. Whatever the second tier innovation is that jumps AI into true reasoning rather than pattern matching is going to be wild.

[–] TommySoda 4 points 2 weeks ago
[–] Hackworth 2 points 2 weeks ago* (last edited 2 weeks ago)

In its latest audit of 10 leading chatbots, compiled in September, NewsGuard found that AI will repeat misinformation 18% of the time

70% of the instances where AI repeated falsehoods were in response to bad actor prompts, as opposed to leading prompts or innocent user prompts.
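Back-of-the-envelope arithmetic on those two quoted figures (assuming, as a simplification, the 18% rate applies uniformly across the audit's prompts):

```python
# Rough arithmetic on the two NewsGuard figures quoted above:
# an 18% repeat rate, with 70% of repeats coming from bad actor prompts.
repeat_rate = 0.18        # share of prompts where misinformation was repeated
bad_actor_share = 0.70    # share of those repeats from bad actor prompts

per_1000 = 1000 * repeat_rate                 # ~180 repeats per 1000 prompts
from_bad_actors = per_1000 * bad_actor_share  # ~126 driven by bad actor prompts
organic = per_1000 - from_bad_actors          # ~54 from leading/innocent prompts
print(round(per_1000), round(from_bad_actors), round(organic))
```

So roughly 54 in 1000 ordinary-looking prompts would still surface a falsehood, which is the number that matters for everyday users.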

[–] [email protected] 0 points 2 weeks ago

Shocked Pikachu.