this post was submitted on 18 Dec 2023
114 points (75.0% liked)

AI-screened eye pics diagnose childhood autism with 100% accuracy

[–] [email protected] 50 points 10 months ago (6 children)

100%? That's a fucking lie. Nothing in life is 100%.

[–] scorpious 24 points 10 months ago

Are you 100% sure of that?

[–] [email protected] 13 points 10 months ago (3 children)

A convolutional neural network, a deep learning algorithm, was trained using 85% of the retinal images and symptom severity test scores to construct models to screen for ASD and ASD symptom severity. The remaining 15% of images were retained for testing.

It correctly identified 100% of the testing images. So it's accurate.
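For reference, the 85/15 hold-out they describe is the standard way to test a model on data it never saw during training. A minimal Python sketch (the filenames, image count, and seed here are invented for illustration, not from the paper):

```python
import random

def train_test_split(items, train_frac=0.85, seed=42):
    """Shuffle a dataset and split it into train/test portions."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

# Hypothetical filenames standing in for the retinal photographs
images = [f"retina_{i:04d}.png" for i in range(1000)]
train, test = train_test_split(images)
print(len(train), len(test))  # 850 150
```

The key property is that the 15% test images contribute nothing to training, so accuracy on them estimates performance on unseen data.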

[–] Wogi 28 points 10 months ago (1 children)

100% accuracy is troublesome. It's literally Statistics 101 stuff: they tell you in no uncertain terms never, never to trust 100% accuracy.

You can be certain to some value of p. That number is never 0. p = .001 is suspicious as fuck, but doable. p = .05 is great if you have a decent sample size.

They had fewer than 1000 participants.

I just don't trust it. Neither should they. Neither should you. At least not until someone else replicates the experiment and also finds this AI to be 100% accurate.

[–] [email protected] 20 points 10 months ago (4 children)

What they're saying, as far as I can tell, is that after training the model on 85% of the dataset, the model predicted whether a participant had an ASD diagnosis (as a binary choice) 100% correctly for the remaining 15%. I don't think this is unheard of, but I'll agree that a replication would be nice to eliminate systematic errors. If the images from the ASD and TD sets were taken with different cameras, for instance, that could introduce an invisible difference in the datasets that an AI could converge on. I would expect them to control for stuff like that, though.

[–] dragontamer 13 points 10 months ago* (last edited 10 months ago) (1 children)

I would expect them to control for stuff like that, though.

What was the problem with that male vs female deep-learning test a few years ago?

That all the males were photographed earlier in the day, so the sun angle in the background was in one direction, while all the females were photographed later in the day, so the sun was at a different angle? And so it turned out the deep-learning model had effectively been trained on the window in the background?

100% accuracy almost certainly means this kind of effect happened. Nobody gets a perfect score; all good tests should come out at least a little bit shoddy.

[–] [email protected] 2 points 10 months ago

Definitely possible, but we'll have to wait for some sort of replication (or lack of) to see, I guess.

[–] BreadstickNinja 7 points 10 months ago

Yeah, exactly. They're reporting findings. Saying that it worked in 100% of the cases they tested is not making a claim that it will work in 100% of all cases ever. But if they had 30 images and it classified all 30 images correctly, then that's 100%.

The article headline is what's misleading. First, it's poorly written - "AI-screened eye PICS DIAGNOSE childhood autism." The pics do not diagnose the autism, so the subject of the verb is wrong. But even if it were rephrased, stating that the AI system diagnoses autism itself is a stretch. The AI system correctly identified individuals previously diagnosed with autism based on eye pictures.

This is an interesting but limited finding that suggests AI systems may be capable of serving as one diagnostic tool for autism, based on one experiment in which they performed well. Anything more than that is overstating the findings of the study.

[–] dirtdigger 3 points 10 months ago (1 children)

You need to report two numbers for a classifier, though. I can create a classifier that catches all cases of autism just by saying that everybody has autism. You also need a false positive rate.
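To make that concrete, here's a toy sketch (the class counts are made up) showing that the "everybody has autism" classifier gets perfect sensitivity while its specificity is zero, which is why one number alone is meaningless:

```python
def sensitivity_specificity(y_true, y_pred):
    """True positive rate and true negative rate for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, tn / neg

labels = [1] * 30 + [0] * 120       # 30 ASD, 120 TD (invented numbers)
always_asd = [1] * len(labels)      # classifier that says "ASD" for everyone
sens, spec = sensitivity_specificity(labels, always_asd)
print(sens, spec)  # 1.0 0.0 -- catches every case, flags every control
```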

[–] [email protected] 4 points 10 months ago (1 children)

True, but as far as I can tell the AUROC measure they refer to incorporates both.

[–] dirtdigger 2 points 10 months ago

Yup, you're right, good catch 🙂

[–] Bgugi 3 points 10 months ago (1 children)

They talk about collecting the images: the two populations of images were collected separately. It's probably not 100% of the difference, but it might have been enough to push it up to 100%.

[–] [email protected] 3 points 10 months ago

You mean like the infamous AI model for detecting skin cancer that, it turned out, was simply detecting whether there's a ruler in the photo? In all of the data fed into it, the skin cancer photos had rulers and the control images did not.

[–] [email protected] 7 points 10 months ago (1 children)

Then somebody's lying with a creative application of 100% accuracy rates.

The confidence interval for the procedure you describe is not 100%.

[–] [email protected] 10 points 10 months ago

From TFA:

For ASD screening on the test set of images, the AI could pick out the children with an ASD diagnosis with a mean area under the receiver operating characteristic (AUROC) curve of 1.00. AUROC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUROC of 0.0; one whose predictions are 100% correct has an AUROC of 1.0, indicating that the AI’s predictions in the current study were 100% correct. There was no notable decrease in the mean AUROC, even when 95% of the least important areas of the image – those not including the optic disc – were removed.

They at least define how they get the 100% value, but I'm not an AIologist so I can't tell if it is reasonable.
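For anyone wondering what AUROC actually measures: it's the probability that a randomly chosen positive case is scored above a randomly chosen negative one, so 1.00 means the model's scores perfectly separate the two groups. A pure-Python sketch (the scores here are invented, not from the study):

```python
def auroc(scores_pos, scores_neg):
    """Probability a positive score outranks a negative one (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Every positive outscores every negative -> AUROC of 1.0, as reported
print(auroc([0.9, 0.8, 0.95], [0.1, 0.3, 0.2]))  # 1.0
```

Any overlap between the two score distributions pulls the value below 1.0, which is why a reported 1.00 on a small test set raises eyebrows.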

[–] foolinthemaking 0 points 10 months ago

Yeah, from the way they wrote it, it sounds to me like they indirectly trained on the test set.

[–] kromem 8 points 10 months ago

Other aspects weren't 100%, such as identifying the severity (which was around 70%).

But if I gave a model pictures of dogs and traffic lights, I'd not at all be surprised if that model had a 100% success rate at determining if a test image was a dog or a traffic light.

And in the paper they discuss some of the prior research around biological differences between ASD and TD ocular development.

Replication would be nice and I'm a bit skeptical about their choice to use age-specific models given the sample size, but nothing about this so far seems particularly unlikely to continue to show similar results.

[–] VelvetStorm 7 points 10 months ago

Not even your statement?

[–] piecat 5 points 10 months ago

Could we reasonably expect an AI to get something right 100% of the time if a human could do it with 100% accuracy?

Could you tell pretty easily if someone has Down syndrome?

Maybe some kind of feature exists that we aren't aware of.

[–] Gigan 4 points 10 months ago (1 children)
[–] surewhynotlem 9 points 10 months ago

taxes

  • Only if you're poor