this post was submitted on 30 Sep 2023
1067 points (98.7% liked)

[–] [email protected] 13 points 1 year ago (2 children)
[–] [email protected] 43 points 1 year ago* (last edited 1 year ago)

More transparent about data collection, and less likely to reinforce biases. See Mozilla's vision for trustworthy AI.

[–] [email protected] 29 points 1 year ago (1 children)
[–] [email protected] 16 points 1 year ago (3 children)

I feel the issue with AI models isn't their source being closed but the derived model itself not being transparent and auditable. The sheer number of ML experts who cannot explain how their own models produce a given result is the biggest concern, and solving it requires a completely different approach to model architecture and visualisation.
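
One of the visualisation approaches alluded to here is input-gradient saliency: it shows which inputs a prediction was sensitive to, even though it says nothing about the weights themselves. A minimal sketch, assuming PyTorch and a toy stand-in model (any differentiable classifier works the same way):

```python
import torch
import torch.nn as nn

# Toy stand-in model; shapes are arbitrary for illustration.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(1, 4, requires_grad=True)  # one input example
score = model(x)[0, 1]                     # logit for class 1
score.backward()                           # gradient of the score w.r.t. the input

saliency = x.grad.abs()                    # large values = influential inputs
print(saliency)
```

Note that this audits one prediction at a time; it doesn't make the model as a whole transparent, which is exactly the gap the comment is pointing at.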

[–] cynar 23 points 1 year ago

Unfortunately, the nature of these models makes it very difficult to get an understanding of the innards. That's part of the point: you don't need to. The best we can do is monitor how a model is built and what connects into and out of it.

The open source bits let you see that it's not passing data on without your permission. If the training data is also open, you can check it for biases, e.g. 90% of the faces being white males.
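
A minimal sketch of the kind of audit open training data allows: count a demographic attribute across the dataset's metadata. The file name and the "perceived_group" column are hypothetical; real datasets label things differently.

```python
import csv
from collections import Counter

counts = Counter()
with open("faces_metadata.csv", newline="") as f:   # hypothetical metadata file
    for row in csv.DictReader(f):
        counts[row["perceived_group"]] += 1         # hypothetical column name

# Print each group's share of the dataset; a 90% skew shows up immediately.
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} ({n / total:.1%})")
```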

[–] [email protected] 6 points 1 year ago

No amount of ML expertise will let someone know exactly how a model produced a result. Training a model from data involves a lot of very delicate math performed an uncountable number of times, and it simply isn't possible to comprehend the work going on inside in any meaningful way other than by guesswork.
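
To make the "delicate math done uncountable times" concrete, here's a minimal sketch of the core training loop, with NumPy, a single toy neuron, and made-up data. Even at this scale the learned weights are just numbers with no human-readable meaning attached; real models repeat this kind of update billions of times over billions of weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                            # toy inputs
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # toy labels

W = rng.normal(size=3)                    # one "neuron's" weights
for step in range(10_000):                # real models: billions of updates
    p = 1 / (1 + np.exp(-(X @ W)))        # sigmoid prediction
    grad = X.T @ (p - y) / len(y)         # gradient of the log loss
    W -= 0.1 * grad                       # one delicate nudge among many

print(W)  # three opaque numbers; nothing here "explains" any prediction
```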

[–] [email protected] 2 points 1 year ago

We already know how these models fundamentally work. Why exactly does it matter how a model produced some result? /gen