this post was submitted on 15 Apr 2024
57 points (74.4% liked)

Technology

all 32 comments
[–] Ranvier 69 points 7 months ago* (last edited 7 months ago) (2 children)

It's just a multiple-choice test with question prompts. This is the exact sort of thing an LLM should be very good at. This isn't ChatGPT trying to do the job of an actual doctor; it would be quite abysmal at that. And even this multiple-choice test had to be stacked in favor of ChatGPT.

Because GPT models cannot interpret images, questions including imaging analysis, such as those related to ultrasound, electrocardiography, x-ray, magnetic resonance, computed tomography, and positron emission tomography/computed tomography imaging, were excluded.

Don't get me wrong though, I think there are some interesting ways AI can provide useful assistive tools in medicine, especially for tasks involving integrating large amounts of data. But the authors use some misleading language, saying things like AI "are performing at the standard we require from physicians," which would only be true if the job of a physician were filling out multiple-choice tests.

[–] [email protected] 11 points 7 months ago

I, too, can pass the Boards if you remove all the questions I don't understand.

[–] [email protected] 8 points 7 months ago

I’d be fine with LLMs being a supplementary aid for medical professionals, but not with them doing the whole thing.

[–] Etterra 28 points 7 months ago (4 children)

I wonder why nobody seems capable of making an LLM that knows how to do research and cite real sources.

[–] NosferatuZodd 15 points 7 months ago (1 children)

I mean, LLMs pretty much just try to guess what to say in a way that matches their training data, while research is about testing or measuring things in reality, looking at the data, and drawing conclusions from it, so it doesn't seem feasible for LLMs to do research on their own.

They may be used as part of research, but they can't do the whole thing, since a crucial part of most research is the actual data, and you'd need a LOT more than just an LLM to get that.

[–] BigMikeInAustin 11 points 7 months ago

Yup! LLMs don't put facts together. They just look for patterns, without any concept of what they are looking at.
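
To make "looking for patterns" concrete, here's a minimal sketch of what an LLM does at every step, assuming the Hugging Face `transformers` package and the small GPT-2 model as a stand-in for a bigger model: it scores every token in its vocabulary and picks a likely next one, with no concept of whether the resulting sentence is true.

```python
# Minimal sketch: greedy next-token prediction with a small open model.
# GPT-2 here is only a stand-in for larger LLMs; the loop is the same idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient presents with fever and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits                 # a score for every token in the vocabulary
    next_id = torch.argmax(logits[0, -1]).unsqueeze(0)   # take the single most probable next token
    input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))  # a plausible continuation, with no notion of whether it's true
```

Everything a chat model produces is built out of that one operation, which is why "plausible" and "factual" come apart so easily.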

[–] [email protected] 8 points 7 months ago (2 children)

Have you ever tried Bing Chat? It does that. LLMs that do web searches and make use of the results are pretty common now.

[–] [email protected] 7 points 7 months ago* (last edited 7 months ago) (2 children)

Bing uses ChatGPT.

Despite using search results, it also hallucinates, like when it told me last week that IKEA had built a model of aircraft during World War 2 (uncited).

I was trying to remember the name of a well known consumer goods company that had made an aircraft and also had an aerospace division. The answer is Ball, the jar and soda can company.

[–] [email protected] 1 points 7 months ago

I had it tell me a certain product had a feature it didn't and then cite a website that was hosting a copy of the user manual… that didn't mention said feature. Having it cite sources makes it way easier to double-check whether it's spewing bullshit, though.

[–] [email protected] 0 points 7 months ago

Yes, but it shows how an LLM can combine its own AI with information taken from web searches.

The question I'm responding to was:

I wonder why nobody seems capable of making an LLM that knows how to do research and cite real sources.

And Bing Chat is one example of exactly that. It's not perfect, but I wasn't claiming it was. Only that it was an example of what the commenter was asking about.

As you pointed out, when it makes mistakes you can check them by following the citations it has provided.

[–] [email protected] 7 points 7 months ago* (last edited 7 months ago) (2 children)

Because the inherent design of modern AIs is not deterministic.

Making the model progressively bigger cannot fix that. We need an entirely new approach to AI to do that.
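
A minimal sketch of the determinism point, assuming the Hugging Face `transformers` package and GPT-2 as a stand-in: with a non-zero sampling temperature, which is how chat deployments are typically run, the same prompt can produce a different answer on every run.

```python
# Minimal sketch: sampling with temperature > 0 gives different outputs for the same prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The most likely diagnosis is", return_tensors="pt")

for run in range(3):
    out = model.generate(**inputs, do_sample=True, temperature=0.9,
                         max_new_tokens=12, pad_token_id=tokenizer.eos_token_id)
    print(run, tokenizer.decode(out[0], skip_special_tokens=True))  # usually three different endings
```

Greedy decoding is repeatable in principle, but it doesn't make the underlying answers any more grounded, which is the real problem here.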

[–] [email protected] 1 points 7 months ago

Bigger models do start to show more emergent intelligent properties, and there are components being added to LLMs to make them more logical and robust. At least, this is what OpenAI and others are saying about even bigger datasets.

[–] [email protected] -1 points 7 months ago

For me, the biggest indicator that we're barking up the wrong tree is energy consumption.

Compare the energy required to feed a human with that required to train and run the current "leading edge" systems.

From a software development perspective, I think machine learning is a very useful way to model unknown variables, but that's not the same as "intelligence".

[–] BetaDoggo_ 3 points 7 months ago

Cohere's command-r models are trained for exactly this type of task. The real struggle is finding a way to feed relevant sources into the model. There are plenty of projects that have attempted it but few can do more than pulling the first few search results.
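
For what it's worth, the "feed relevant sources into the model" step is usually just prompt construction: number the retrieved documents and ask the model to cite them. A minimal sketch below, with hypothetical `web_search` and `llm` helpers standing in for whatever search backend and model a given project actually uses.

```python
# Minimal sketch of retrieval-augmented prompting; `web_search` and `llm` are hypothetical
# placeholders, not any specific project's API.
def build_cited_prompt(question: str, sources: list[dict]) -> str:
    """Stuff retrieved documents into the prompt and ask the model to cite them by number."""
    numbered = "\n\n".join(
        f"[{i + 1}] {s['url']}\n{s['text'][:1000]}"   # truncate so everything fits in the context window
        for i, s in enumerate(sources)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources as [n] after each claim.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

# sources = web_search("did any consumer goods company build aircraft in WW2")   # placeholder
# answer = llm(build_cited_prompt("Which consumer goods company built aircraft?", sources))  # placeholder
```

Getting good candidates into `sources`, and getting the model to actually stay inside them instead of hallucinating around them, is the hard part, which matches the experiences described upthread.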

[–] [email protected] 26 points 7 months ago (2 children)

What would be much more useful is to provide a model with actual patient files and see what kills more people, doctors or models.

[–] [email protected] 27 points 7 months ago (2 children)
[–] satanmat 9 points 7 months ago (2 children)

Like “Is it Cake”

But life or death is on the line….

“Is it Lupus?” Or “Are you Dying?”

[–] [email protected] 4 points 7 months ago

A hypochondriac's worst nightmare drama show.

[–] [email protected] 3 points 7 months ago (1 children)

You just described "House M.D."

[–] satanmat 2 points 7 months ago (1 children)

Well YEAH… it’s never Lupus…

[–] [email protected] 2 points 7 months ago

Except when it was lupus!

[–] [email protected] 2 points 7 months ago (1 children)

After hitting submit I realised that the word "model" was ambiguous, but after considering that for a moment, I realised that I am okay with that.

Nothing like a little ambiguity to keep people smiling.

[–] BigMikeInAustin 1 points 7 months ago* (last edited 7 months ago)

Supposedly lots of models of G.I. Joe are up for doing rectal exams.

[–] [email protected] 4 points 7 months ago

GPT will require every test and yet, for the sake of authenticity, randomly commit medical errors.

[–] [email protected] 19 points 7 months ago* (last edited 7 months ago)

All these always do the same thing.

Researchers reduced [the task] to producing a plausible corpus of text, and then published the not-so-shocking results that the thing that is good at generating plausible text did a good job generating plausible text.

From the OP, buried deep in the methodology:

Because GPT models cannot interpret images, questions including imaging analysis, such as those related to ultrasound, electrocardiography, x-ray, magnetic resonance, computed tomography, and positron emission tomography/computed tomography imaging, were excluded.

Yet here's their conclusion:

The advancement from GPT-3.5 to GPT-4 marks a critical milestone in which LLMs achieved physician-level performance. These findings underscore the potential maturity of LLM technology, urging the medical community to explore its widespread applications.

It's literally always the same. They reduce a task such that ChatGPT can do it, then report that it can do it in the headline, with the caveats buried way later in the text.

[–] Poe 10 points 7 months ago

Neat, but I don't think LLMs are the way to go for this sort of thing.

[–] [email protected] 2 points 7 months ago

The 17th percentile in peds is not surprising. The model mixing its training data with adults would absolutely kill someone.

[–] [email protected] 0 points 7 months ago (1 children)

Google started killing the doctor industry (GPs). AI will finally be the nail in the coffin, except doctors will never give up the power to prescribe.

[–] BigMikeInAustin 8 points 7 months ago (1 children)

LLMs can't design experiments or think of consequences or quality of life.

They also don't "learn" from asking questions or from a 1-time input. They need to see hundreds or thousands of people die from something to recognize the pattern of something new.

[–] [email protected] -2 points 7 months ago (1 children)

Yeah, but they can give the common answers of bed rest and hydration, which are a doctor's go-to for everything.

I imagine a future where LLMs take over the menial duties of "you have a cold," "you have high blood pressure," etc.

So actual doctors spend more time on less menial tasks.

But since, as a society, we develop automation and then fire everyone around it, I can't see it really happening.

[–] BigMikeInAustin 2 points 7 months ago

In a society that valued preventative healthcare, people would get deep scans regularly when healthy, and an AI would take up the menial work of sifting through the large amount of extra data to detect issues early. Theoretically an AI would give the same amount of attention to the first scan of the day as the last scan of a 12 hour day.