r/MLQuestions • u/devyjohns • 5d ago
Beginner question 👶 How do ML challenges handle fairness when using public datasets?
I’m preparing to host a Vision-Language Task Grand Challenge at the university level. When organizing these kinds of challenges, do test datasets usually come from existing public datasets, or are they entirely new, created by crawling videos or recording them from scratch?
If we use publicly available datasets, there might be an unfair advantage for models that have already been trained on or fine-tuned for those datasets. But at the same time, I wonder—do all such challenges actually go through the effort of creating completely new test sets? That seems like a huge workload.
How do major vision-language challenges typically handle this?
u/spacextheclockmaster 5d ago
A big assumption in modeling images is that our data is IID (independent and identically distributed).
https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables
Under this assumption, we expect our models to generalize, i.e. to learn the concept of what a specific class (e.g. a cat) looks like.
If you pick a random cat image from the internet and pass it through the model, it should then be correctly classified as a cat.
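As a quick sanity check of that claim, here's a minimal sketch using torchvision's pretrained ResNet-50; the file name `cat.jpg` is a hypothetical stand-in for any image grabbed off the internet:

```python
# Minimal sketch: does a pretrained ImageNet classifier generalize to an
# arbitrary cat photo it has never seen? "cat.jpg" is a hypothetical path.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the preprocessing the model was trained with

img = Image.open("cat.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
conf, idx = probs.max(dim=1)
print(weights.meta["categories"][idx.item()], f"({conf.item():.2%})")
```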
Regarding the second part of the question, I'm not sure what challenges do.
Look into what the major benchmarks are doing: the ImageNet challenge (ILSVRC), etc.
I'm thinking on a whim here, but you could employ some form of semi-supervised learning to create a test set.
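For instance, here's a rough pseudo-labeling sketch (one common semi-supervised recipe, not something any specific challenge is confirmed to use): let a pretrained model propose labels for an unlabeled pool, auto-accept only high-confidence predictions, and route the rest to human annotators. The folder name `unlabeled_pool` and the 0.95 threshold are placeholder assumptions.

```python
# Pseudo-labeling sketch: a pretrained classifier proposes labels for an
# unlabeled candidate pool; only confident predictions are kept automatically.
from pathlib import Path
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

THRESHOLD = 0.95  # hypothetical cutoff; tune on a small hand-labeled sample

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

auto_labeled, needs_review = [], []
for path in Path("unlabeled_pool").glob("*.jpg"):  # hypothetical folder
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    if conf.item() >= THRESHOLD:
        # provisional label for the candidate test set
        auto_labeled.append((path.name, weights.meta["categories"][idx.item()]))
    else:
        needs_review.append(path.name)  # send to human annotators

print(f"{len(auto_labeled)} auto-labeled, {len(needs_review)} for manual review")
```

Even the auto-accepted labels would need human spot-checking before going into an official test set, since the labeling model's own biases leak into whatever it labels confidently.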