this post was submitted on 24 Apr 2024
280 points (98.6% liked)

Technology


“Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses,” the union wrote in the post. “For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their skin tone, affect, or demeanor, are often not detected by AI and algorithms.”

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

EnderMB · 27 points · 7 months ago

Way back in 2010 I did some paper reading at university on AI in healthcare, and even back then there were dedicated AI systems that could outperform many healthcare workers in the US and Europe.

Many of the issues arose not from performance but from liability. If a single person is liable, that's manageable, but what happens if a computer program provides an incorrect dosage to an infant, or a procedure with two possible options goes wrong where a human would have chosen the other?

The problems were also framed as observational. The AI would often get cases with a clear-cut solution right more often than humans did, but it observed far less. The conclusion was basically the same one many other industries have reached: AI can produce some useful tools to help humans, but using it to replace humans results in fuck-ups that leave the hospital (and, more notably, its leaders) liable.

[email protected] · 6 points · 7 months ago

Yes. AI is great as a helper or assistant, but whatever it does always has to be double-checked by a human. Then again, humans can get tired or careless too, so it's not a bad thing to have as long as it stays purely supplemental.
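
The "supplemental, always human-checked" role described above is essentially a human-in-the-loop architecture: the model only proposes, and every proposal lands in a review queue rather than being applied. A minimal sketch of that pattern, with all names (`Suggestion`, `triage`, the confidence floor) invented for illustration rather than taken from any real clinical system:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    patient_id: str
    recommendation: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

def triage(suggestions, confidence_floor=0.9):
    """Split model output into 'flag urgently for review' vs 'routine review'.

    Nothing is auto-applied: even high-confidence suggestions go into a
    human queue, matching the purely supplemental role described above.
    Low confidence just moves a suggestion to the front of the line.
    """
    urgent, routine = [], []
    for s in suggestions:
        (urgent if s.confidence < confidence_floor else routine).append(s)
    return urgent, routine

urgent, routine = triage([
    Suggestion("A-100", "increase saline drip", 0.97),
    Suggestion("A-101", "reduce infant dosage", 0.55),
])
# The low-confidence suggestion is surfaced first for clinician attention.
assert [s.patient_id for s in urgent] == ["A-101"]
assert [s.patient_id for s in routine] == ["A-100"]
```

The design choice the comment implies is that the threshold changes *priority*, never *autonomy*: both lists still end at a human, which is also what sidesteps the liability question from the earlier comment.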