this post was submitted on 14 Feb 2025
Technology
Hey can someone dumb down the dumbed down explanation for me please?
AI is a magical black box that performs a bunch of actions to produce an output. We can’t trust what a developer says the black box does inside without it being completely open source (including weights).
This is a concept for a system where the actions performed can be proven to those who don't have visibility inside the box, so they can trust the box is doing what it says it's doing.
For example, a game AI opponent that can prove it isn't cheating by providing proof of the actions it took. In theory.
Zero-knowledge proofs make a lot of sense in cryptography, but in a more abstract setting like this one, the scheme still relies on a lot of trust that the implementation actually generates proofs for all actions.
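To make the cryptographic side concrete, here is a toy interactive zero-knowledge proof (Schnorr-style identification): the prover convinces a verifier it knows a secret exponent x, without ever revealing x. The group parameters below are illustrative only, not production-grade cryptography, and this is a sketch of the classic textbook protocol, not anything from the article.

```python
import secrets

# Illustrative group parameters (NOT production-grade crypto).
p = 2**127 - 1          # a Mersenne prime
g = 3                   # base for the demo group
q = p - 1               # exponents are reduced mod p - 1

# Prover's secret and matching public value
x = secrets.randbelow(q)        # the secret the prover keeps hidden
h = pow(g, x, p)                # public: h = g^x mod p

# One round of the Schnorr identification protocol
r = secrets.randbelow(q)        # prover: fresh random nonce
a = pow(g, r, p)                # prover -> verifier: commitment
c = secrets.randbelow(q)        # verifier -> prover: random challenge
s = (r + c * x) % q             # prover -> verifier: response

# Verifier accepts iff g^s == a * h^c (mod p); x itself is never sent
assert pow(g, s, p) == (a * pow(h, c, p)) % p
print("proof accepted")
```

The point of the trick: s mixes the secret x with a fresh random r, so the verifier learns that the prover knows x without learning x. zk-ML schemes apply the same idea at much larger scale, to the steps of a model's computation.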
Whenever I see Web3, I personally lose any faith in whatever is being presented or proposed. To me, blockchain is an impressive solution to no real problem (except perhaps border control / customs).
ZK in this context allows someone to thoroughly test a model and publish the results with proof that the same model was used.
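A minimal sketch of the "same model was used" part: commit to the model's weights with a hash and publish the commitment alongside the test results. A real zk-ML system would additionally prove the inference itself in zero knowledge; this toy version only shows the binding step, and all the names and values here are hypothetical.

```python
import hashlib
import json

def commit_to_model(weights: dict) -> str:
    """Hash the serialized weights so the model can't be silently swapped later."""
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Stand-in for real model weights (hypothetical values)
weights = {"layer1": [0.12, -0.57], "layer2": [1.03]}
commitment = commit_to_model(weights)

# Published alongside the benchmark results:
report = {"accuracy": 0.91, "model_commitment": commitment}

# Later, an auditor given the same weights can check the published claim:
assert commit_to_model(weights) == report["model_commitment"]
```

Because the hash is binding, any change to the weights produces a different commitment, so published results can't later be attributed to a different model.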
Blockchain for zk-ML is actually a great use case, for two reasons:
1. The way AI is trained today creates a black-box solution; the author says only the model's developers know what goes on inside the black box. This is a major pain point in AI, where we are trying to understand models so we can make them better and more reliable. The author mentions that unless AI companies open-source their work, it's impossible for everyone else to 'debug' the circuit.
2. Zero-knowledge proofs are how they are trying to combat this: using mathematical algorithms, the output of an AI model can be verified in real time without exposing the underlying intellectual property. This could be used to train AI further and drastically increase its reliability, so it could be trusted with more important decisions and adhere much more closely to the strategies for which it is deployed.
Thanks for the 'for dummies' explanation.