this post was submitted on 26 Aug 2023
400 points (85.7% liked)

Technology


ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans. Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

[–] zeppo 223 points 1 year ago (8 children)

I’m still confused that people don’t realize this. It’s not an oracle. It’s a program that generates sentences word by word based on statistical analysis, with no concept of fact checking. It’s even worse that someone actually did a study instead of simply acknowledging or realizing that ChatGPT is happy to just make stuff up.
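
To make "word by word based on statistical analysis" concrete, here's a deliberately tiny toy sketch (made-up Python, nothing like ChatGPT's actual model or training data): generation is just repeatedly sampling a likely next word, and nowhere in the loop is there a "check whether this is true" step.

```python
import random

# Hypothetical word-to-word probabilities "learned" from text statistics.
next_word_probs = {
    "the":     {"patient": 0.4, "treatment": 0.3, "study": 0.3},
    "patient": {"should": 0.6, "received": 0.4},
    "should":  {"receive": 0.7, "avoid": 0.3},
    "receive": {"chemotherapy": 0.5, "immunotherapy": 0.5},
}

def generate(start, max_words=5):
    """Build a sentence by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        # The next word is picked because it's probable, not because it's correct.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the patient should receive immunotherapy"
```

Real LLMs do this over tokens with a neural network estimating the probabilities, but the principle is the same: plausible continuations, no fact checking.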

[–] Zeth0s 32 points 1 year ago (1 children)

Publish or perish, that's why

[–] [email protected] 5 points 1 year ago

I'm trying really hard for the latter.

[–] [email protected] 21 points 1 year ago

Yeah, this stuff was always marketed as automating the simple and repetitive things we do daily. It's mostly the media, I guess, who started misleading everyone into thinking this was AI like Skynet. It's still useful, just not as an all-knowing AI god.

[–] inspxtr 16 points 1 year ago (2 children)

While I agree it has become more common knowledge that they’re unreliable, this adds to the myriad of examples for corporations, big organizations, and governments to either abstain from using them, or at least be informed about these various cases and their nuances so they know how to integrate them.

Why? I think partly because many of these organizations are racing to adopt them, whether for cost-cutting purposes or to chase the hype, or are too slow to regulate them… and there are, or could still be, very good uses that justify adoption in the first place.

I don’t think it’s good enough to have a blanket conception to not trust them completely. I think we need multiple examples of the good, the bad and the questionable in different domains to inform the people in charge, the people using them, and the people who might be affected by their use.

Kinda like the recent event at DEF CON on exploiting LLMs: it’s not enough that we have some intuition about their harms; the people at the event aim to demonstrate the extremes of such harms, AFAIK. These efforts can help inform developers/researchers working to mitigate them, as well as show concretely to anyone trying to adopt them how harmful they could be.

Regulators also need these examples in specific domains so they can be informed when creating policies for them, sometimes by building on or modifying the existing policies of those domains.

[–] zeppo 10 points 1 year ago

This is true and well-stated. Mainly what I wish people would understand is that there are currently appropriate uses, like 'rewrite my marketing email', but generating information that could result in great harm if inaccurate is an inappropriate use. It's all about the specific model, though - if you had a ChatGPT system trained extensively on medical information, it would result in greater accuracy, but the information would still need expert human review before any decisions were made. Mainly I wish the media had been more responsible and accurate in portraying these systems to the public.

[–] [email protected] 7 points 1 year ago

I don’t think it’s good enough to have a blanket conception to not trust them completely.

On the other hand, I actually think we should, as a rule, not trust the output of an LLM.

They’re great for generative purposes, but I don’t think there’s a single valid case where the accuracy of their response should be outright trusted. Any information you get from an AI model should be validated outright.

There are many cases where a simple once-over from a human is good enough, but any time it tells you something you didn’t already know you should not trust it and, if you want to rely on that information, you should validate that it’s accurate.

[–] iforgotmyinstance 11 points 1 year ago (2 children)

I know university professors struggling with this concept. They are so convinced using an LLM is plagiarism.

It can lead to plagiarism if you use it poorly, which is why you control the information you feed it. Then proofread and edit.

[–] zeppo 12 points 1 year ago

Another related confusion in academia recently is the 'AI detector'. These could easily be defeated with minor rewrites, if they were even accurate in the first place. My favorite misconception was the story of a professor who told students "I asked ChatGPT if it wrote this, and it said yes", which is just really not how it works.

[–] [email protected] 4 points 1 year ago (1 children)

I can understand the plagiarism argument, though you have to extend the definition of it. If I am expected to write an essay, but I use ChatGPT instead, then I am fraudulently presenting the work as my own. Plagiarism might not be the right word, or maybe it's a case where language is going to evolve so that plagiarism includes passing off AI generated work as your own. Either way it's cheating unless I was specifically allowed to use AI.

[–] iforgotmyinstance 0 points 1 year ago (1 children)

If the argument and the sources are incongruous, that isn't the fault of the LLM/AI. That's the author's fault for not proofreading and editing.

You assume an inherent morality of LLMs but they are amoral constructs. They are tools, and you limit yourself by not learning them.

[–] [email protected] 0 points 1 year ago

I didn't say anything about the sources being incongruent? That's a completely separate issue. We were talking about plagiarism.

I don't understand the morality comment either, I didn't ascribe any morality to AI, I was talking about whether using them fits the definition of plagiarism or not.

If you are expected to write it yourself, and you use an LLM to generate it, then that's cheating in my opinion. Yes, of course we should learn to use AI, but if you are told to do something and you get a person or LLM to do it for you, then you didn't complete the task as you were told. And at university that can have consequences.

[–] fubo 8 points 1 year ago (5 children)

It’s even worse that someone actually did a study instead of simply acknowledging or realizing that ChatGPT is happy to just make stuff up.

Sure, the world should just trust preconceptions instead of doing science to check our beliefs. That worked great for tens of thousands of years of prehistory.

[–] zeppo 28 points 1 year ago* (last edited 1 year ago) (1 children)

It's not merely a preconception. It's a rather obvious and well-known limitation of these systems. What I am decrying is that some people, from apparent ignorance, think things like "ChatGPT can give a reliable cancer treatment plan!" or "here, I'll have it write a legal brief and not even check it for accuracy". But sure, I agree with you, minus the needless sarcasm. It's useful to prove or disprove even absurd hypotheses. And clearly people need to be explicitly told that ChatGPT is not always factual, so hopefully this helps.

[–] adeoxymus 8 points 1 year ago (1 children)

I'd say that a measurement always trumps arguments. At least then you know how accurate they are; a statement like this cannot follow from reason alone:

The JAMA study found that 12.5% of ChatGPT's responses were "hallucinated," and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.

[–] zeppo 5 points 1 year ago

That's useful. It's also good to note that the information the agent can relay depends heavily on the data used to train the model, so it could change.

[–] PetDinosaurs 9 points 1 year ago

Why the hell are people down voting you?

This is absolutely correct. We need to do the science. Always. Doesn't matter what the theory says. Doesn't matter that our guess is probably correct.

Plus, all these studies tell us much more than just the conclusion.

[–] [email protected] 7 points 1 year ago

"After an extensive three-year study, I have discovered that touching a hot element with one's bare hand does, in fact, hurt."

"That seems like it was unnecessary..."

"Do U even science bro?!"

Not everything automatically deserves a study. Were there any non-rando people out there claiming that ChatGPT could totally generate legit cancer treatment plans that people could then follow?

[–] Takumidesh 7 points 1 year ago

It's not even a preconception; it's willful ignorance. The website itself tells you multiple times that it is not accurate.

The bottom of every chat has this text: "Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT August 3 Version"

And when you first use it, a modal pops up explaining the same thing.

[–] Windex007 3 points 1 year ago

ChatGPT isn't some newly discovered sentient species.

It's a machine designed and built by human engineers.

This is like suggesting that we study fortune cookies to see if they can accurately forecast the future. The manufacturer can simply tell you the limitations of their product... namely, that they cannot divine the future.

[–] dual_sport_dork 5 points 1 year ago* (last edited 1 year ago)

This is why, without some hitherto unknown or as-yet undeveloped capability, these sorts of LLMs will never actually be useful for performing any kind of mission-critical work. The catch-22 is this: you can't trust the AI not to produce work containing some kind of potentially dangerous, showstopping, or embarrassing error. This isn't a problem if you're just, say, having it paint pictures. Or maybe even helping you twiddle the CSS on your web site. If there is a failure here, no one dies.

But what if your application is critical to life or safety? Like prescribing medical care, or designing a building that won't fall down, or deciding which building the drone should bomb. Well, you have to get a trained or accredited professional in whatever field we're talking about to check all of its work. And how much effort does that entail? As it turns out, pretty much exactly as much as having said trained or accredited professional do the work in the first place.

[–] [email protected] 3 points 1 year ago

True. I tried to explain this to my parents because they were scared of it, and they seemed skeptical.

[–] [email protected] -1 points 1 year ago

But it’s supposed to be the future! I want the full talking spaceship like in Star Trek, not this … “initial learning steps” BS!

I was raised on SciFi and am now mad that I don’t have all the cool made up things from those shows/movies!