This is a guest post written by Inference Labs. You can see their version of the post here.
From Web3 and Web2 platforms to traditional brick-and-mortar businesses, every domain we navigate is shaped by rigorously engineered incentive systems that structure trust, value, and participation. Now player 2 has entered the chat — AI Agents. As they join, how do we ensure open and fair participation for all? From “Truth Terminal” to emerging AI Finance (AiFi) systems, the core solution lies in implementing robust verification primitives.
Isn’t this still subject to the same problem, where a system can lie about its inference chain by returning a plausible chain which wasn’t the actual chain used for the conclusion? (I’m thinking from the perspective of a consumer sending an API request, not the service provider directly accessing the model.)
Also:
Any time I see a highly technical post talking about AI and/or crypto, I imagine a skilled accountant living in the middle of mob territory. They may not be directly involved in any scams themselves, but they gotta know that their neighbors are crooked and a lot of their customers are gonna use their services in nefarious ways.
The model that is doing the inference is committed to beforehand (it’s hashed), so you can’t lie about what model produced the inference. That is how ezkl, the underlying library, works.
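For intuition, here is a minimal sketch of the commitment step in plain Python. This is just a hash of a model file, not ezkl’s actual circuit-friendly commitment scheme, and the file path is a hypothetical placeholder:

```python
import hashlib

# Minimal sketch (not ezkl's actual scheme): hash the model file and publish
# the digest before any inference is served. "model.onnx" is a hypothetical path.
def commit_to_model(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Every proof the prover later produces is bound to this digest, so quietly
# swapping in a different model would make verification fail.
print(commit_to_model("model.onnx"))
```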
I know a lot of people in this cryptography space, and there are definitely scammers across the general “crypto space”, but in the actual cryptography space most people are driven by curiosity or ideology.
I appreciate the reply! And I’m sure I’m missing something, but… Why can’t you just lie about the model you used?
Ahh, ya, so this is a deep rabbit hole, but I will try to explain it as best I can.
Zero knowledge is a cryptographic way of proving that some computation was done correctly. This allows you to “hide” some inputs if you want.
In the context of the “ezkl” library, this lets someone train a model and publicly commit to it by posting a hash of the model somewhere. Someone else can then run inference on that model, and what comes out is the hash of the model, the output of the inference, and a cryptographic “proof” that anyone can verify, showing the computation was indeed done with that model and the result is correct. The person running the inference can still hide the input.
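Here is a toy illustration of that data flow, showing which values are public and which stay private. It is not real zero-knowledge cryptography and not the ezkl API; the function names, the stand-in “model,” and the placeholder proof string are all hypothetical:

```python
import hashlib
import json

def model_commitment(model_bytes: bytes) -> str:
    # Public commitment to the model, published before any inference is served.
    return hashlib.sha256(model_bytes).hexdigest()

def run_private_inference(model_bytes: bytes, private_input: list) -> dict:
    # The prover runs the model on an input it keeps secret and emits a public
    # statement: the model hash, the output, and (in the real system) a ZK proof
    # that this output really came from running that model on *some* input.
    output = sum(private_input)  # stand-in for the actual model computation
    return {
        "model_hash": model_commitment(model_bytes),
        "output": output,
        "proof": "<zk proof bytes would go here>",
    }

def verify(statement: dict, published_hash: str) -> bool:
    # The verifier never sees the input. It checks that the statement is bound
    # to the hash committed in advance (and, in the real system, that the
    # accompanying proof verifies).
    return statement["model_hash"] == published_hash

model = b"weights of the committed model"
published = model_commitment(model)
stmt = run_private_inference(model, private_input=[1, 2, 3])
print(json.dumps(stmt, indent=2))
print("accepted:", verify(stmt, published))
```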
Or let’s say you have a competition for whoever can train the best classifier for some specific task. I could train a model and, when I run it, the test-set inputs could be public; I could “hide” the model itself, but the zk computation would still reveal the hash of the model. So let’s say I won this competition: at the end I could reveal the model I used, and anyone would be able to check that the model I revealed was in fact the same model that was run and beat everyone else.
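A sketch of that reveal-and-check step, again using a plain SHA-256 hash as a stand-in for the real commitment (all names here are hypothetical):

```python
import hashlib

def digest(model_bytes: bytes) -> str:
    return hashlib.sha256(model_bytes).hexdigest()

# During the contest: only this hash (plus the proven test-set results) is public.
secret_model = b"my contest-winning classifier weights"
committed_hash = digest(secret_model)

# After the contest: the winner reveals the model.
revealed_model = secret_model

# Anyone can re-hash the revealed model and confirm it matches the commitment
# that the winning runs were bound to.
assert digest(revealed_model) == committed_hash
print("revealed model matches the committed hash")
```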