this post was submitted on 19 Dec 2023
27 points (65.2% liked)

Autism

6868 readers

A community for respectful discussion and memes related to autism acceptance. All neurotypes are welcome.

We have created our own instance! Visit the Autism Place community for more info.

Community Values

  • Acceptance
  • Openness
  • Understanding
  • Equality
  • Reciprocity
  • Mutuality
  • Love

Rules

  1. No abusive, derogatory, or offensive posts/comments, e.g. racism, sexism, religious hatred, homophobia, gatekeeping, trolling.
  2. Posts must be related to autism; off-topic discussion happens in the Matrix chat.
  3. Your posts must include a text body. It doesn't have to be long; it just needs to be descriptive.
  4. Do not request donations.
  5. Be respectful in discussions.
  6. Do not post misinformation.
  7. Mark NSFW content accordingly.
  8. Do not promote Autism Speaks.
  9. General Lemmy World rules.

Encouraged

  1. Open acceptance of all autism levels as a respectable neurotype.
  2. Funny memes.
  3. Respectful venting.
  4. Describing pictures/memes with text in the post body for our visually impaired users.
  5. Welcoming and accepting attitudes.
  6. Questions regarding autism.
  7. Questions on confusing situations.
  8. Seeking and sharing support.
  9. Engagement in our community's values.
  10. Expressing a difference of opinion without directly insulting another user.
  11. Please report questionable posts and let the mods deal with them.

Chat Room

  • We have a chat room! Want to engage in dialogue? Come join us at the community's Matrix Chat.


Helpful Resources

founded 1 year ago

cross-posted from: https://lemmy.world/post/9724922

AI-screened eye pics diagnose childhood autism with 100% accuracy

all 16 comments
[–] Deestan 80 points 11 months ago (2 children)

A full 100% sounds weird. It means complete overlap with the ASD assessment, which itself isn't bulletproof. Weird in the way that suggests mistakes in the data: e.g. all ASD pictures taken on the same day and picking up a date timestamp, "ASD" written in the metadata or filename, or different lighting in different labs.

I didn't see any immediate problems in the published paper, but if these were my results I'd be too worried to publish them.

[–] sosodev 57 points 11 months ago (1 children)

It sounds like the model is overfitting. They say it scored 100% on the held-out test set, but a perfect score almost always means the model has learned to ace that particular dataset and will flop in the real world.

I think we shouldn’t put much weight behind this news article. This is just more overblown hype for the sake of clicks.

[–] [email protected] 9 points 11 months ago (1 children)

The article says they kept 15% of the data for testing, so it's not overfitting. I'm still skeptical though.

[–] sosodev 8 points 11 months ago (1 children)

I’m pretty sure it’s possible to overfit even with large testing sets.

[–] [email protected] 15 points 11 months ago

The paper mentions how the images were processed (cropping 10% off some to remove name, age, etc.). But all were from the same centre, and only pixel data was used. Given the other work referenced on retinal thinning in ASD, maybe it is a relatively simple task for this kind of model. But they do say using multi-centre images will be an important part of the validation. It's quite possible the performance would drop away once differences in camera, etc. are factored in.
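To make the single-centre worry concrete, here is a minimal sketch (hypothetical numbers in plain NumPy, not the paper's actual model or data) of how a classifier can ace a randomly held-out 15% test split yet collapse on data from a new site, when a site artifact happens to track the label:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical setup: the real biological signal is weak, but a site
# artifact (say, camera brightness) perfectly tracks the label because
# all positive cases were imaged at one centre.
y = rng.integers(0, 2, n)
signal = y + rng.normal(0, 3.0, n)     # weak genuine signal
artifact = y + rng.normal(0, 0.05, n)  # strong spurious cue

# Random 85/15 split, as in the article: both halves share the artifact.
idx = rng.permutation(n)
train, test = idx[:850], idx[850:]

# "Model": threshold the artifact at the training-set midpoint.
threshold = artifact[train].mean()
test_acc = np.mean((artifact[test] > threshold) == y[test])
print(f"held-out accuracy: {test_acc:.2f}")  # ~1.00

# New site: the artifact no longer correlates with the label.
y_new = rng.integers(0, 2, n)
artifact_new = rng.normal(0, 0.05, n)
new_acc = np.mean((artifact_new > threshold) == y_new)
print(f"new-site accuracy: {new_acc:.2f}")   # ~0.50
```

Because a random split leaves the confound present in both halves, the held-out score looks perfect; only imaging at a second site exposes the shortcut, which is exactly why multi-centre validation matters here.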

[–] sosodev 52 points 11 months ago* (last edited 11 months ago)

We need to be very careful with news outlets that focus on science hype. Often they're jumping to conclusions from poorly written papers that have yet to be peer reviewed and reproduced.

Just take a look at the homepage of this website. They post several times a day with much of it being obvious clickbait backed by very little journalistic integrity.

[–] [email protected] 35 points 11 months ago (1 children)

False negatives are what freak me out. Sorry bud, the magic machine says you're not really autistic. No adaptations or special school for you. Good luck out there.

[–] Shialac 8 points 11 months ago (1 children)

So just like it is without the AI?

[–] [email protected] 3 points 11 months ago

It's easier to reason with a doctor than with a computer. I can imagine you'd be in the system for good after such an "evaluation", so it could mean slim chances of retesting.

[–] Bouchtroubouli 32 points 11 months ago

Well, 100% accuracy after removing all the noise in the dataset ...

At least it proves that their method can separate two extremes. But what about real life, where 90% of people fall somewhere in between?
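A quick illustration of that point, using a made-up one-dimensional trait score (an assumption for the sketch, nothing from the study): a threshold fit on two well-separated groups looks perfect, while most of a general population lands in the band where the same rule carries almost no margin:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical study sample: only the clear-cut tails of a trait score.
group_a = rng.normal(-2.0, 0.5, 300)  # clearly non-ASD sample
group_b = rng.normal(2.0, 0.5, 300)   # clearly ASD sample

# "Model": a midpoint threshold between the two group means.
threshold = (group_a.mean() + group_b.mean()) / 2

study_acc = (np.mean(group_a <= threshold) + np.mean(group_b > threshold)) / 2
print(f"accuracy on the extremes: {study_acc:.2f}")  # ~1.00

# General population: most scores sit between the extremes, where the
# threshold's verdict is close to a coin flip.
middle = rng.normal(0.0, 1.0, 600)
ambiguous = np.mean(np.abs(middle - threshold) < 1.0)
print(f"fraction close to the threshold: {ambiguous:.2f}")
```

Separating hand-picked extremes says little about the crowded middle of the distribution, where most real-world screening decisions would actually happen.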

[–] [email protected] 25 points 11 months ago (1 children)

I am highly skeptical. My guess is that both the article and the research are highly flawed.

[–] RGB3x3 17 points 11 months ago (1 children)

Nothing in science is 100%. You could survey 100,000 people about what color the sky is and you wouldn't get 100% saying it's blue.

[–] z00s 8 points 11 months ago

Depends what time of day it is when you ask

[–] [email protected] 20 points 11 months ago

It worries me that this research came out of South Korea, a country that I've heard is particularly stigmatizing of neurodivergence.

[–] SeeMinusMinus 6 points 11 months ago

What I want to see is how the test would go if people with other conditions were included. There is a good chance it would misdiagnose people if used outside the context of just NTs and autistic people. Countless other conditions could also cause whatever the AI is seeing.