this post was submitted on 16 Sep 2024

Inside a bustling unit at St. Michael's Hospital in downtown Toronto, one of Shirley Bell's patients was suffering from a cat bite and a fever, but otherwise appeared fine — until an alert from an AI-based early warning system showed he was sicker than he seemed.

While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand. That warning showed the patient's white blood cell count was "really, really high," recalled Bell, the clinical nurse educator for the hospital's general medicine program.

The cause turned out to be cellulitis, a bacterial skin infection. Without prompt treatment, it can lead to extensive tissue damage, amputations and even death. Bell said the patient was given antibiotics quickly to avoid those worst-case scenarios, in large part thanks to the team's in-house AI technology, dubbed Chartwatch.

"There's lots and lots of other scenarios where patients' conditions are flagged earlier, and the nurse is alerted earlier, and interventions are put in earlier," she said. "It's not replacing the nurse at the bedside; it's actually enhancing your nursing care."
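The kind of early warning described above can be sketched as a simple rule over incoming lab results. Everything below (analyte names, reference ranges, the `check_results` helper) is illustrative only, assumed for the sketch, and not Chartwatch's actual implementation:

```python
# Hypothetical early-warning check on incoming lab results.
# Reference ranges and analyte names are illustrative, not clinical guidance.

ADULT_REFERENCE_RANGES = {
    # analyte: (low, high); units are illustrative
    "wbc": (4.0, 11.0),       # white blood cells, x10^9/L
    "temp_c": (36.1, 37.9),   # body temperature, degrees Celsius
}

def check_results(results):
    """Return a list of alert strings for any out-of-range value."""
    alerts = []
    for analyte, value in results.items():
        low, high = ADULT_REFERENCE_RANGES.get(analyte, (None, None))
        if low is None:
            continue  # unknown analyte: no rule, so no alert
        if value < low:
            alerts.append(f"{analyte} low: {value} (ref {low}-{high})")
        elif value > high:
            alerts.append(f"{analyte} high: {value} (ref {low}-{high})")
    return alerts
```

The point of the sketch is timing, not sophistication: running such a check the moment results arrive, rather than at the noon chart review, is what buys the several-hour head start described in the article.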

top 6 comments
[–] [email protected] 0 points 1 month ago (1 children)

This is how you get worse racism in hospitals

[–] [email protected] 0 points 1 month ago (1 children)
[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Black people are more likely to die (due to systemic racism), so AI says: save the white person.

We saw this a lot at the height of the pandemic, which is why many nurses argued that the best triage method was random selection.

As always, the problem isn't inherently that AI exists. The problem is that humans trust its output and use it to make decisions (and the laws in many jurisdictions still allow them to).

[–] [email protected] 0 points 1 month ago (1 children)

That warning showed the patient’s white blood cell count was “really, really high,” recalled Bell, the clinical nurse educator for the hospital’s general medicine program.

I'm not a doctor, but even an idiot would know when a WBC is "really, really high" and assume infection. I mean, shit, "suffering from a cat bite and a fever, but otherwise appeared fine"... um, a cat bite AND A FEVER... red flag!

“It’s not replacing the nurse at the bedside; it’s actually enhancing your nursing care.”

I would argue that this would make nurses less important, and would make them "lazy" by not giving them opportunities to identify these simple things on a regular basis.

Would a nurse who doesn't know what a very high WBC entails be paid less? I would think so.

I can see AI/machine learning used in very complex cases where a human HCP would simply not have the number-crunching capability to find a diagnosis, but this was not that case.

[–] [email protected] -1 points 1 month ago* (last edited 1 month ago)

I would argue that this would make nurses less important, and would make them "lazy" by not giving them opportunities to identify these simple things on a regular basis.

Not just nurses, but doctors too. This exact problem was discussed at a conference I recently attended. Some doctors do better with AI assistance, some do worse. As far as we know, it seems to be dependent on how much they "believe in AI". The more they do, the worse they perform when assisted.

[–] [email protected] -1 points 1 month ago

This is exactly what we want machine learning to do, analyze existing data and quickly report to a human with what it found.

Generative LLMs are garbage; analyzing data with machine-learning aids is useful.
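That "analyze existing data, report to a human" pattern can be sketched in a few lines. This is a minimal illustration using an assumed per-patient z-score rule; the function name, threshold, and approach are hypothetical, and real clinical models are far more involved:

```python
import statistics

def flag_anomalies(history, latest, z_threshold=3.0):
    """Compare a new reading against the patient's own history.

    Returns (is_anomalous, z_score) so that a human, not the model,
    makes the final call on what to do with the flag.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    z = (latest - mean) / stdev if stdev else 0.0
    return abs(z) >= z_threshold, z
```

The design choice worth noting is that the function only surfaces a flag and its evidence (the z-score); the decision stays with the clinician, which is the distinction the comment above is drawing.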