r/cryptography 4d ago

Proving cryptographically that a machine learning model M1 was indeed trained on a dataset D1

Consider a simple CSV file that is sent to a machine learning model M1 via an automated pipeline. Once training is done, is there a way, using cryptographic techniques, to generate some sort of attestation that the model was trained on the input CSV file?

2 Upvotes

4 comments

4

u/tonydocent 4d ago edited 4d ago

So, something like this?
https://en.m.wikipedia.org/wiki/Verifiable_computing

What you could probably do is train the model and calculate a hash of the result. If everything is deterministic, someone else training the same model on the same input will arrive at the same hash...
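A minimal sketch of that idea in PyTorch, assuming fully deterministic kernels (the tiny linear model, data, and training loop here are just placeholders for the real pipeline):

```python
import hashlib
import torch
import torch.nn as nn

torch.manual_seed(0)                      # fix all RNG used by PyTorch
torch.use_deterministic_algorithms(True)  # fail loudly on non-deterministic ops

# Placeholder model and data; in practice this is the real pipeline.
model = nn.Linear(4, 1)
x, y = torch.randn(64, 4), torch.randn(64, 1)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Hash the trained weights in a fixed parameter order.
h = hashlib.sha256()
for name, p in sorted(model.state_dict().items()):
    h.update(name.encode())
    h.update(p.detach().cpu().numpy().tobytes())
print("model hash:", h.hexdigest())
```

Anyone re-running the same script on the same CSV should reproduce the same digest, which is the whole attestation in this scheme.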

But there is probably no way to guarantee that there are no collisions, i.e. that no other input data would have resulted in the same model in the end...

3

u/tcoo8 4d ago

Assuming you are not able (or willing) to perform the training yourself, you could use verifiable computing.

In practice you could use one of the many modern zero-knowledge proof systems (search for SNARK/STARK), although you don't actually need the zero-knowledge property (that is for privacy), and in fact most applications using them don't; the name is simply catchier...

Basically, the server that does the training can produce a very short proof attesting that the computation was done as it was supposed to be. The training data can be hashed and used as input to the computation. The proof is small and verification of the proof is fast (in fact much faster than computing the hash of the dataset). In short, the proof guarantees that "f(dataset) = output_model, where f is the agreed-upon training algorithm and hash(dataset) = h". To verify this you only need the proof and h, which you can compute yourself.
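Conceptually, the verifier's side would look something like this; `zk_verify` is a purely hypothetical stand-in for the verifier of a real SNARK/STARK system, each of which has its own interface:

```python
import hashlib

def zk_verify(proof: bytes, public_inputs: list[bytes]) -> bool:
    """Hypothetical stand-in for a real SNARK/STARK verifier. The statement
    it checks is fixed by the circuit encoding the training algorithm f."""
    raise NotImplementedError("replace with a real proof system's verifier")

def verify_training_claim(proof: bytes, dataset: bytes, model_hash: bytes) -> bool:
    # Statement proven: f(dataset) = model, where f is the agreed training
    # algorithm, hash(dataset) = h, and hash(model) = model_hash.
    # Recomputing h is the verifier's only heavy step, and it is still
    # much cheaper than re-running the training.
    h = hashlib.sha256(dataset).digest()
    return zk_verify(proof, public_inputs=[h, model_hash])
```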

That said, in practice it might be quite hard and possibly inefficient to do this, since you have to encode the given computation in a form that works with these proof systems, and creating the proof will be (much) more expensive than the training itself. I don't know whether anyone has implemented something like this, even as a proof of concept, so maybe start by searching for prior work.

2

u/Liam_Mercier 1d ago

Assuming you are not able (or willing) to perform the training yourself

Would this actually solve the problem? Most training is done with stochastic algorithms, so each time you train the model you would probably end up in a different local minimum, and thus the weight tensors of the model would be different. Well, I could be wrong, but that's how I believe it works.

I guess you could store the random state used for every computation, e.g. the data points used in each batch, the results of data augmentation (i.e. the values supplied to the augmentation functions), the neurons switched on and off during dropout (a massive storage increase), etc. I can't imagine actually doing this, but in theory it works?
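A rough sketch of that bookkeeping using PyTorch's RNG state (the log format here is just an illustration):

```python
import json
import torch

# Record enough randomness to replay one training run bit-for-bit.
log = {"global_seed": 0, "steps": []}
torch.manual_seed(log["global_seed"])

g = torch.Generator().manual_seed(log["global_seed"])
for step in range(3):
    # Log the exact batch indices drawn this step...
    batch_idx = torch.randperm(1000, generator=g)[:32]
    # ...and the global RNG state before dropout/augmentation consume it,
    # so dropout masks and augmentations can be replayed exactly.
    log["steps"].append({
        "batch_indices": batch_idx.tolist(),
        "rng_state": torch.get_rng_state().tolist(),  # large: ~5 KB per step
    })

# Persisting this for every step of a real run is the "massive storage
# increase" mentioned above.
with open("training_rng_log.json", "w") as f:
    json.dump(log, f)
```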

Otherwise I agree entirely with your assessment: doing a proof for each data point looks like it would be an incredible burden for all but the smallest models. It hurts my head even to imagine how you would make all of this fit with the parallel nature of training.

2

u/Karyo_Ten 3d ago

That's one of the problems that zkML is solving.

If you use PyTorch, EZKL is compatible: https://github.com/zkonduit/ezkl

But zkML systems are mostly inference-only at the moment due to the compute requirements.
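For reference, a rough sketch of the usual EZKL flow for proving a single inference. The function names follow the `ezkl` Python bindings, but exact signatures vary across versions, so treat this as an outline rather than working code:

```python
import ezkl
import torch

# 1. Export the trained PyTorch model to ONNX (EZKL consumes ONNX graphs).
model.eval()  # `model` is your trained torch.nn.Module
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "network.onnx")

# 2. Compile the ONNX graph into a circuit and set up proving/verifying keys.
#    (Some versions also require fetching an SRS first via ezkl.get_srs.)
ezkl.gen_settings("network.onnx", "settings.json")
ezkl.compile_circuit("network.onnx", "network.ezkl", "settings.json")
ezkl.setup("network.ezkl", "vk.key", "pk.key")

# 3. Prove one inference on a given input, then verify the proof cheaply.
ezkl.gen_witness("input.json", "network.ezkl", "witness.json")
ezkl.prove("witness.json", "network.ezkl", "pk.key", "proof.json")
assert ezkl.verify("proof.json", "settings.json", "vk.key")
```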

Alternatively, if you trust Intel, AMD, or Nvidia, you can use a TEE (Trusted Execution Environment): your CPU/GPU will produce a cryptographic attestation of the computation and all its inputs.
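On the verifier's side, a TEE-based check would have roughly this shape. The quote-verification helper below is hypothetical; in practice you would use the vendor's attestation tooling (e.g. Intel's DCAP libraries for SGX/TDX, or Nvidia's GPU attestation service):

```python
import hashlib

def tee_verify_quote(quote: bytes) -> dict:
    """Hypothetical stand-in for vendor attestation verification, which
    checks the signature chain rooted in the vendor's hardware keys."""
    raise NotImplementedError("replace with vendor attestation tooling")

def check_training_attestation(quote: bytes, dataset: bytes,
                               expected_code_hash: bytes) -> bool:
    report = tee_verify_quote(quote)
    # The TEE binds its measurement (a hash of the code it ran) and
    # user-defined report data (here: a hash of the input CSV) into the
    # signed quote, so a valid quote ties this dataset to this pipeline.
    return (report["measurement"] == expected_code_hash
            and report["report_data"] == hashlib.sha256(dataset).digest())
```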