Why not allow the user to provide the seed used for generation? That way we could at least detect that the model has changed if the same prompt with the same seed suddenly gives a new answer (assuming they don't cache answers). You could compare different providers that supposedly serve the same model, and if the model is open-weight you could even compare against your own hardware or rented GPUs.
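For concreteness, a minimal sketch of that comparison, assuming OpenAI-compatible endpoints that honor a seed parameter (the base URLs, API keys, and model name here are placeholders):

    # Hypothetical sketch: send the same prompt + seed to two providers and diff the output.
    # Assumes OpenAI-compatible /v1/chat/completions endpoints that honor "seed";
    # base URLs, API keys, and the model name are placeholders.
    import requests

    PROMPT = "Explain the birthday paradox in one paragraph."
    SEED = 12345

    def complete(base_url, api_key, model):
        resp = requests.post(
            f"{base_url}/v1/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json={
                "model": model,
                "messages": [{"role": "user", "content": PROMPT}],
                "seed": SEED,        # only some providers honor this
                "temperature": 0,    # greedy decoding reduces run-to-run noise
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    a = complete("https://provider-a.example.com", "KEY_A", "some-open-weight-model")
    b = complete("https://provider-b.example.com", "KEY_B", "some-open-weight-model")
    print("identical" if a == b else "outputs differ")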
AFAIK seed determinism can't really be relied upon between two machines, maybe not even between two different GPUs.
Something like a perplexity/log-likelihood measurement across a large enough number of prompts/tokens might get you the same result in a statistical sense, though. I expect those comparison percentages at the top are something like that.
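As a rough illustration of that idea (not something the article claims to do), you could average the per-token log-probs of a provider's own greedy completions over a batch of prompts, assuming an OpenAI-compatible endpoint that returns token logprobs; a quantized or swapped model should shift the statistic once enough tokens are averaged:

    # Hypothetical sketch: a statistical fingerprint of a provider's model, computed
    # as the mean log-prob of its greedy completions over a fixed prompt set.
    # Assumes an OpenAI-compatible endpoint with logprobs support; URL, key, and model
    # name are placeholders.
    import requests

    PROMPTS = [
        "Summarize the French Revolution in two sentences.",
        "What is a hash function?",
        "Translate 'good morning' into French.",
    ]

    def mean_logprob(base_url, api_key, model):
        total, count = 0.0, 0
        for prompt in PROMPTS:
            data = requests.post(
                f"{base_url}/v1/chat/completions",
                headers={"Authorization": f"Bearer {api_key}"},
                json={
                    "model": model,
                    "messages": [{"role": "user", "content": prompt}],
                    "temperature": 0,
                    "max_tokens": 128,
                    "logprobs": True,  # per-token log-probabilities of the output
                },
                timeout=120,
            ).json()
            for tok in data["choices"][0]["logprobs"]["content"]:
                total += tok["logprob"]
                count += 1
        return total / count

    print(mean_logprob("https://provider.example.com", "KEY", "some-open-weight-model"))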
I'm somehow more convinced by the method shown in the introduction of the article: run a number of evals across model providers and see how they compare. This also catches all the other configuration changes an inference provider can make, like KV-cache quantization. And it's easy to understand and talk about, and the threat model is fairly clear (if you're really distrustful, be wary of a provider returning canned answers to your benchmark questions).
Of course, attestation is conceptually neat and wastes less compute than repeated benchmarks. It definitely has its place.
This comes up so frequently that I’ve seen at least 3-4 different websites running daily benchmarks on providers and plotting their performance.
The last one I bookmarked has already disappeared. I think they’re generally vibe coded by developers who think they’re going to prove something but then realize it’s expensive to spend that money on tokens every day.
They also use limited subsets of big benchmarks to keep costs down, which increases the noise in the results. The last time someone linked to one of these sites claiming a decline in quality, the chart was a noisy, mostly flat graph with a regression line drawn on it that sloped very slightly downward.
The title here seems very different from the post. All that verification happens locally only; there's no remote validation at any point. So I'm not sure why you'd even apply this check. If you're running the model yourself, you know what you're downloading and can check the hash once for transfer problems. Then you can do different things to prevent storage bitrot. But you're not proving anything to your users this way.
You'd need to run a full, public system image with known attestation keys and return some kind of signed response with every request to do that. Which is not impossible, but the remote part seems to be completely missing from the description.
The verification is not happening locally only. The client SDKs fetch the measurement of the weights (plus the system software and inference engine) that is pinned to Sigstore, then grab the same measurement (aka a remote attestation of the full, public system image) from the running enclave, and check that the two are exactly equal. Our previous blog explains this in more detail: https://tinfoil.sh/blog/2025-01-13-how-tinfoil-builds-trust
Sorry it wasn’t clear from the post!
What prevents the provider from sending the client an attestation of one hardware state while actually running another?
Call me an old fuddy-duddy, but my faith in the quality of your reporting really fell through the floor when I saw that the first image showed Spongebob Squarepants swearing at the worst-performing numbers.
EDIT: I read through the article, and it's a little over my head, but I'm intrigued. Does this actually work?
Is modelwrap running on arbitrary clients? I'm not following the whole post, but how are you able to maintain confidence in client-owned hardware/disks, given the security model the method seems to depend on?
I don't understand what stops an inference provider from giving you a hash of whatever they want. None of this proves that's what they're running, it only proves they know the correct answer. I can know the correct answer all I want, and then just do something different.
Attestation always involves a "document" or a "quote" (two names for basically a byte buffer) and a signature from someone. Intel SGX & TDX => signature from Intel. AMD SEV => signature from AMD. AWS Nitro Enclaves => signature from AWS.
Clients who want to talk to a service that has attestation send a nonce and get back a doc with the nonce in it. The clients have a hard-coded certificate from Intel, AMD, or AWS somewhere in them, and they check that the doc carries a good signature.
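A sketch of what that check looks like on the client side, with a hypothetical fetch_quote transport, a placeholder vendor root certificate, and the quote parsed into a dict. Real verifiers (Intel DCAP, AMD's SEV-SNP tooling, AWS NSM libraries) also walk a certificate chain and check revocation; this only shows the shape of the check:

    # Hypothetical sketch of the client side of an attestation handshake.
    # fetch_quote(nonce) stands in for "ask the enclave for a quote/doc" and is
    # assumed to return (raw_doc_bytes, signature, parsed_fields).
    import os
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Placeholder: the vendor (Intel/AMD/AWS) root cert baked into the client.
    VENDOR_ROOT_PEM = b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

    def verify_attestation(fetch_quote, expected_measurement: bytes) -> bool:
        nonce = os.urandom(32)
        raw, signature, fields = fetch_quote(nonce)

        vendor_key = x509.load_pem_x509_certificate(VENDOR_ROOT_PEM).public_key()
        # 1. The doc must carry a good signature from the hardware vendor
        #    (assuming an ECDSA P-384 key, as in SEV-SNP; raises on a bad sig).
        vendor_key.verify(signature, raw, ec.ECDSA(hashes.SHA384()))
        # 2. The doc must echo our nonce, ruling out replayed quotes.
        if fields["nonce"] != nonce:
            return False
        # 3. The measurement in the doc must match what we expect to be running.
        return fields["measurement"] == expected_measurement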
Yes, though I see the term abused often enough that the mere use of the word "attestation" isn't enough for me to believe a scheme is sound. Nowadays "attestation" is often just slang for "validate we can trust [something]". I didn't see any mechanism described in the article to validate that the weights actually being used are the same as the weights that were hashed.
There are a few components necessary to make it work:
1. The provider open sources the code running in the enclave and pins the measurement to a transparency log such as Sigstore
2. On each connection, the client SDK fetches the measurement of the code actually running (through a process known as remote attestation)
3. The client checks that the measurement that the provider claimed to be running exactly matches the one fetched at runtime.
We explain this more in a previous blog: https://tinfoil.sh/blog/2025-01-13-how-tinfoil-builds-trust
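For illustration only, the "measurement" of the weights in step 1 could be as simple as a deterministic hash over the weight files; the sketch below assumes SHA-256 over safetensors shards, which is not necessarily the scheme the enclave image actually uses:

    # Hypothetical sketch of how a weights measurement could be computed at build
    # time. The client never hashes the weights itself: it only compares the value
    # pinned in the transparency log with the value in the enclave's attestation.
    import hashlib
    from pathlib import Path

    def measure_weights(weights_dir: str) -> str:
        h = hashlib.sha256()
        for path in sorted(Path(weights_dir).rglob("*.safetensors")):
            h.update(path.name.encode())        # bind file names into the digest
            with path.open("rb") as f:          # stream large shards in chunks
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
        return h.hexdigest()

A mismatch between the pinned value and the attested one would presumably be the client's signal to refuse the connection.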
What enclave are you using? Is it hardware-backed?
Edit: I found https://github.com/tinfoilsh/cvmimage which says AMD SEV-SNP / Intel TDX, which seems almost trustworthy.
Yes, we use Intel TDX/AMD SEV-SNP with H200/B200 GPUs configured to run in Nvidia Confidential Computing mode
I would be interested to see Apple Silicon in the future, given its much stronger isolation and integrity guarantees. But that is an entirely different tech stack.
In my opinion this is very well written
Two comments so far suggesting otherwise and I guess idk what their deal is
Attestation is taking off
The idea is that you run a workload at a model provider that might cheat on you by altering the model they offer, right? So how does this help? If the provider wants to cheat (and apparently they do), wouldn't they be able to swap the modelwrap container, or maybe even do some shenanigans with the filesystem?
I am ignorant about this ecosystem, so I might be missing something obvious.
The committed weights are open source and pinned to a transparency log, along with the full system image running in the enclave.
At runtime, the client SDK (also open source: https://docs.tinfoil.sh/sdk/overview) fetches the pinned measurement from Sigstore and checks that it exactly matches the attestation from the running enclave. This previous blog explains it in more detail: https://tinfoil.sh/blog/2025-01-13-how-tinfoil-builds-trust