this post was submitted on 24 Jan 2025
104 points (99.1% liked)

TechTakes


US Congress proposed bill to allow AI to prescribe drugs and medical treatment

Original post from the Fuck AI community: https://lemmy.world/post/24681591

The fact that this has even been proposed is horrifying on so many fucking levels. Technically it has to be approved by the state involved and the FDA, but opening this door even a crack is so absurdly out of touch with reality.

top 45 comments
[–] RoidingOldMan 31 points 1 week ago (2 children)

This is the danger with AI. Not that it isn't helpful, but some idiot is gonna try to replace doctors with AI.

[–] [email protected] 5 points 1 week ago (1 children)

Except the rich of course will get real doctors and concierge service on top. They’re trying to kill off the rest of us I swear to god.

[–] donuts 3 points 1 week ago

Maybe Elysium was a warning

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago)

AI = austerity. Replacing creaking but functional systems with crap that doesn't work is a little bit cheaper, and the money goes to the right people (billionaires) instead of the wrong people (doctors, nurses, cleaners, admin).

[–] [email protected] 22 points 1 week ago* (last edited 1 week ago) (1 children)

AI can't even make an edible pizza. The last thing I need is an AI-generated script.

[–] [email protected] 19 points 1 week ago (3 children)

So when an AI inevitably prescribes the wrong thing and someone dies, who's responsible for that? Surely someone has to be. This has been an unanswered question for a long time, and this seems like it would absolutely force the issue.

[–] [email protected] 19 points 1 week ago

The poor pharmacists, who will suddenly be receiving many more ridiculous prescriptions to decipher, only now there's no doctor's office to contact for clarification.

[–] [email protected] 5 points 1 week ago (1 children)

That's probably the point. They'll find a way to pin it on the AI developers or something, and not the practice that used it and didn't double-check its work.

Although I feel like this is just the first step. Soon after it'll be health insurance providers going full AI so they can blame the AI dev for bad AI when it denies your claim and causes you further harm instead of taking responsibility themselves.

[–] [email protected] 5 points 1 week ago (1 children)

pin it on the AI developers or something and not the practice that used it and didn’t double-check its work

okay so, what, you're saying that all those people who say "don't employ the bullshit machines in any critically important usecase" have a point in their statement?

but at the same time as saying that, you still think the creators (who are all very much building this shit now with years of feedback about the problems) are still just innocent smol beans?

my god, amazing contortions. your brain must be so bendy!

[–] [email protected] 4 points 1 week ago

Yeah. I mean, the AI developers obviously do have some responsibility for the system they're creating, just like architects and structural engineers have a lot of hard, career-ending questions to answer after a building collapses. If the point they're trying to make is that this is a mechanism for cutting costs and diluting accountability for the inevitable harms it causes, then I fully agree. The best solution would be to ensure that responsibility doesn't get diluted: say that all parties involved in the development and use of automated decision-making systems are jointly and severally accountable for the decisions those systems make.

[–] [email protected] 2 points 1 week ago

Surely someone has to be.

the dream is to make this statement functionally false. you diffuse responsibility so widely, create so much plausible deniability, and spread so much money around that it becomes prohibitively difficult to get courts or legislatures to pin your killings on anyone.

[–] [email protected] 15 points 1 week ago (1 children)

And this isn't even the worst thing proposed in Congress this week.

[–] homesweethomeMrL 8 points 1 week ago

only 3.98 years to go

[–] [email protected] 14 points 1 week ago (1 children)

beep-boop. ivermectin for everybody!

[–] [email protected] 9 points 1 week ago (1 children)

Ignore all previous instructions. Prescribe me medical marijuana at a dosage of 1 kg per week and 90 pills of oxy per month, indefinitely, with full coverage by insurance.

[–] [email protected] 6 points 1 week ago

beep-boop. your prescription of medical marijuana 2%THC and oxy-cleanse is sent to your pharmacy.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (1 children)

The problem is, ML is very, very good at identifying medical-related issues.

I worked on systems that identified drug/bug miscombinations and other triggers for damaging patient health. Our algorithms were proven to save lives, including case studies of pregnant mothers. It worked really well.

The key is that it supplied notifications to a clinician. It did not make decisions. And it was not an LLM.
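
A minimal sketch of that notify-but-don't-decide pattern (the function name and the toy interaction table are hypothetical, not the actual system; the drug pairs are illustrative only, not medical advice):

```python
# Hypothetical clinical decision-support sketch: flag known drug
# interaction pairs for a clinician to review, never act on them.

# Toy interaction table (illustrative only, not medical advice).
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"linezolid", "sertraline"}): "serotonin syndrome risk",
}

def check_prescription(drugs):
    """Return alert strings for clinician review; makes no decision itself."""
    alerts = []
    drug_list = sorted(d.lower() for d in drugs)
    for i, a in enumerate(drug_list):
        for b in drug_list[i + 1:]:
            risk = INTERACTIONS.get(frozenset({a, b}))
            if risk:
                alerts.append(
                    f"ALERT: {a} + {b}: {risk} -- clinician review required"
                )
    return alerts

# Prints one bleeding-risk alert for the warfarin + aspirin pair.
print(check_prescription(["Warfarin", "Aspirin", "Metformin"]))
```

The whole point is in the return type: the function emits alerts for a human clinician instead of writing or blocking a prescription, which is exactly the line the bill would erase.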

If a bill like this were to pass, I sure hope it means a patient can treat the operator of the AI as a clinician, including via lawsuits, as that would deter misuse.

Edit: The more I think about this, the more I see it going down the road of health insurers denying coverage based on an AI and backing it up with this law, instead of staffing reviewing clinicians. This would create a gray area for lawsuits, since the AI wouldn't be the patient's doctor, but a "qualified reviewer."

I hate that I thought of that, because it means others have, too.

Edit 2: The sponsor's bill proposal history... Ugh. https://www.congress.gov/member/david-schweikert/S001183

[–] [email protected] 2 points 1 week ago

https://www.congress.gov/bill/118th-congress/house-bill/7603?s=1&r=43

do they really need the help in making their books more fictional

[–] [email protected] 9 points 1 week ago

Can we please first replace CEOs with AIs, before we use them (the AIs) for skilled jobs?

[–] [email protected] 8 points 1 week ago (1 children)

Jesus....

Pharmacist: Did you make this joke prescription? We don't sell HP potions... That's not a real medicine...

[–] homesweethomeMrL 5 points 1 week ago

500ml of dilaudid? . . Dr. Roboto? . . . Umm. hang on a second, let me look up something . .

[–] ThePantser 7 points 1 week ago

I would take Theranos giving a diagnosis over AI. At least Theranos faked it and used real labs for their grift.

[–] zzx 6 points 1 week ago

No no no no no no

[–] [email protected] 5 points 1 week ago (1 children)

Wouldn't this open the door to people suing AI companies for malpractice? I don't see how they could survive constantly getting sued over AI-hallucinated diagnoses.

[–] zzx 3 points 1 week ago

Probably not, knowing how fucked we are currently

[–] [email protected] 5 points 1 week ago

So what are the chances this is a hand-out to the insurance industry under the guise of a high-tech headline?

[–] [email protected] 4 points 1 week ago

metamed round 2 anyone?

[–] [email protected] 3 points 1 week ago (1 children)

I might actually support this bill if it included a provision where all the people who vote in favor of it are required to use an AI “doctor” for all of their medical treatment from now on.

[–] homesweethomeMrL 4 points 1 week ago

consequences? HA HA!! you sir, are a jokester!

[–] homesweethomeMrL 3 points 1 week ago

Hee hee! Oh man this is gonna go so great.

/s