this post was submitted on 10 Sep 2023
87 points (79.6% liked)

Technology


As AI capabilities advance in complex medical scenarios that doctors face on a daily basis, the technology remains controversial in medical communities.

[–] [email protected] 24 points 1 year ago* (last edited 1 year ago) (6 children)

A few things. First, that's an abysmal rate when it comes to people's health; a doctor with that success rate would be sued into next century. The rate dropped further for differential diagnosis, implying ChatGPT was leaving out important rarer possibilities. Doctors often work by starting with the most common explanation and narrowing down from there over repeated rounds of testing if it ends up being something uncommon, but one of their primary jobs is also thinking about rarer, dangerous conditions that can mimic more common ones and must be ruled out immediately.

Most importantly, the information fed into this was written in accurate, descriptive medical terminology. That is a language that patients, in general, do not speak. People can also describe things very differently: a patient may say something is weak when a doctor would say no, that's numb, not weak, or vice versa. And "dizzy" could mean just about anything. Someone typing their own story directly into ChatGPT is going to get much worse results than this without someone to interpret their word choices and ask the right questions, questions people may not even realize are important.

Anyway, the possibilities of AI use in healthcare are interesting, but it's disappointing that it does worse the less common things get and that it's bad at differential diagnosis, exactly the areas where it would be most helpful as an aid to diagnosis. Some other areas to think about, though: maybe as a front end for finding clinical trials in the US government database, which can be hard to browse, or maybe streamlining the endless insurance paperwork. I'd be surprised if insurance companies don't use something similar already.

[–] [email protected] 9 points 1 year ago (1 children)

Don't forget the inherent biases that get introduced with AI training! Women especially have a history of having their symptoms dismissed out of hand; if the LLM training data includes those biases on top of the bad diagnosis rate, women could be really screwed.

[–] inspxtr 3 points 1 year ago

Similarly for people of different races or from different countries … it's not only that their conditions might vary and require more data, it's also that some communities don't visit or trust hospitals enough to even have their data collected for the training set. Or they can't afford to visit.

Sometimes, people from more vulnerable communities (eg LGBT) might prefer not to have such data collected in the first place, making data sparser.
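The underrepresentation both commenters describe has a measurable effect: a model trained on pooled data tends to learn the majority group's pattern and systematically misses the minority group's. A toy sketch of that failure mode, with entirely invented symptom/diagnosis pairs (illustrative only, not real clinical data):

```python
from collections import Counter

# Hypothetical setup: the same symptom maps to different diagnoses in
# two populations, but group "A" dominates the training data.
def sample(group, n):
    truth = {"A": "anemia", "B": "thyroid"}  # invented mapping
    return [("fatigue", truth[group]) for _ in range(n)]

train = sample("A", 1000) + sample("B", 50)  # 20:1 imbalance

# "Training": memorize the majority diagnosis per symptom, pooled
# across groups -- no group information is used.
counts = {}
for symptom, dx in train:
    counts.setdefault(symptom, Counter())[dx] += 1
model = {s: c.most_common(1)[0][0] for s, c in counts.items()}

def accuracy(group):
    test = sample(group, 200)
    return sum(model[s] == dx for s, dx in test) / len(test)

print(accuracy("A"))  # 1.0 -- the majority group is always right
print(accuracy("B"))  # 0.0 -- the minority group is always wrong
```

Real models are far more complex, but the underlying dynamic is the same: when one group's data is sparse, pooled training optimizes for everyone else.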
