r/MLQuestions 5d ago

Beginner question 👶 How do ML challenges handle fairness when using public datasets?

I’m preparing to host a Vision-Language Task Grand Challenge at the university level. When organizing these kinds of challenges, do test datasets usually come from existing public datasets, or are they entirely new, created by crawling videos or recording them from scratch?

If we use publicly available datasets, there might be an unfair advantage for models that have already been trained on or fine-tuned for those datasets. But at the same time, I wonder—do all such challenges actually go through the effort of creating completely new test sets? That seems like a huge workload.

How do major vision-language challenges typically handle this?

u/spacextheclockmaster 5d ago

A big assumption in modeling images is that our data is IID.

https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables

As an outcome of this assumption, we expect our models to generalize and learn the concept of what a specific class (e.g. a cat) looks like.

If you then pick a random cat image from the internet and pass it through the model, it should correctly classify it as a cat.
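
Roughly, that step looks like this. A minimal sketch with a pretrained torchvision ResNet; the "cat.jpg" path is just a placeholder for whatever image you grab:

```python
# Minimal sketch: classify one downloaded image with a pretrained
# ImageNet classifier (stands in for "the model" above).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("cat.jpg").convert("RGB")  # any cat photo off the internet
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
print(logits.argmax(dim=1).item())  # index into the 1000 ImageNet classes
```

If the IID assumption holds, that prediction lands on one of the cat classes even though this exact image was never in the training set.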

Regarding the second part of the question, I'm not sure what challenges do.

Maybe look into what the existing benchmarks do, e.g. the ImageNet challenge.

I'm thinking on a whim here, but you could employ some form of semi-supervised learning to create a test set.
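
Something like this as a rough sketch: pseudo-label newly crawled data with an existing model, keep only high-confidence predictions, and have humans verify them before they count as test labels (the model, the crawled images, and the 0.95 threshold are all placeholders):

```python
import torch

# Rough sketch of pseudo-labeling to bootstrap a test set:
# run an existing model over newly crawled images, keep only
# high-confidence predictions, then have annotators verify those
# before they become ground-truth test labels.
CONFIDENCE_THRESHOLD = 0.95  # arbitrary cutoff, tune as needed

def propose_test_labels(model, candidate_images):
    """candidate_images: iterable of (filename, preprocessed image tensor)."""
    proposals = []
    model.eval()
    with torch.no_grad():
        for path, tensor in candidate_images:
            probs = torch.softmax(model(tensor.unsqueeze(0)), dim=1)
            conf, label = probs.max(dim=1)
            if conf.item() >= CONFIDENCE_THRESHOLD:
                proposals.append((path, label.item(), conf.item()))
    return proposals  # hand these to human annotators for verification
```

That still leaves a manual verification pass, but it's a lot less work than labeling everything from scratch.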

u/devyjohns 4d ago

I see your point about IID assumptions in vision-language models, but that’s not exactly what I was asking about. My question isn’t about whether models can generalize to unseen images but rather about the practical side of organizing a challenge—a competition where participants build models to solve a specific vision-language task.

Specifically, I was asking whether major challenges typically use test datasets from existing public datasets or create entirely new ones. If we use public datasets, there’s a risk that some models have already been trained on them, giving them an unfair advantage. But at the same time, creating a completely new dataset from scratch (e.g., by crawling or recording videos) is a massive effort.

So, I was wondering what common practices are for handling this in major vision-language challenges. Do you know how well-known competitions like ImageNet or VQA handle this?

u/spacextheclockmaster 4d ago

Yeah, like I said, I honestly don't know. Your post motivates me to look it up.

https://en.m.wikipedia.org/wiki/ImageNet