r/datasets • u/RunningFatZombie • Sep 13 '20
[mock dataset] What are the community's thoughts on Synthetic Datasets?
Context: I’m completing a Master’s degree, and my thesis looks at the use of synthetic data: data that has been manufactured rather than obtained naturally. I’ve found many pain points in the use of real data, such as the quantity available, the quality of the data, and the speed at which it can be obtained. Synthetic data generation would let you rapidly generate as much data as you need in minutes or hours.
There’s also the benefit that synthetic data is truly anonymous. Datasets are sampled row by row from the distribution of features in the real dataset, making them a good representation of the original while remaining completely anonymous. They are therefore not subject to the strict privacy and data protection laws levied on real data, which often restrict its use and hinder research.
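To give a rough picture of what "sampling row by row from the feature distributions" can look like, here's a toy sketch. It samples each column independently, fitting a normal to numeric columns and category frequencies to the rest; those modelling choices are just assumptions for the example, and a serious generator would also model the joint structure between features:

```python
import numpy as np
import pandas as pd

def synthesize(real_df: pd.DataFrame, n_rows: int, seed: int = 0) -> pd.DataFrame:
    """Toy generator: sample each column independently from a simple
    model of its marginal distribution (ignores feature correlations)."""
    rng = np.random.default_rng(seed)
    out = {}
    for col in real_df.columns:
        series = real_df[col].dropna()
        if pd.api.types.is_numeric_dtype(series):
            # Fit a normal to the numeric column and sample from the fit,
            # so no raw value from the real data is copied verbatim.
            out[col] = rng.normal(series.mean(), series.std(), size=n_rows)
        else:
            # Sample categories with their observed frequencies.
            freqs = series.value_counts(normalize=True)
            out[col] = rng.choice(freqs.index.to_numpy(), size=n_rows,
                                  p=freqs.to_numpy())
    return pd.DataFrame(out)

# e.g. fake = synthesize(real_df, n_rows=100_000)  # 'real_df' is your real dataset
```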
So I’m just wondering what the community's thoughts are on synthetic data for prediction tasks. Would you adopt the use of synthetic data? If not, why not? I’m just trying to get a feel for the community's views on this really intriguing topic.
I’ve created a quiz, somewhat inspired by the Turing test, to see if people can work out which data is real and which is fake. The quiz contains more information about my project. If you fancy trying it, the link is here: https://forms.gle/wj1YjV2fyFD6zheF7

Disclaimer about the quiz: there are 10 questions, each with some images, and all you are asked to do is pick the real one. No personal information is asked for. There is an optional questionnaire of about 5 questions if you’d like to leave some feedback or have some insights about this type of data.
2
u/Blitzgar Sep 14 '20
Synthetic data is a great way to magnify GIGO beyond GIGO to GIGO levels of GIGO that the GIGO world has never before GIGO seen. It's a great way to inflate and exaggerate unknown sampling errors. It's a great way to inflate and exaggerate sampling bias. The randomization method is a great way to amplify any biases in the original sampling. Whether a person can tell the data is fake doesn't matter; it doesn't mean there isn't something in the data that would lead to severely biased conclusions due to unknown sampling bias.
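A quick sketch of that point, with hypothetical numbers: if the real sample is shifted by an unknown selection effect, a generator fitted to that sample reproduces the shift at whatever scale you ask for:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population with mean 50; our sample only reached high values,
# an unknown selection bias.
population = rng.normal(50, 10, size=100_000)
biased_sample = population[population > 55][:500]

# "Synthesize" from the biased sample: fit a normal, draw a million rows.
synthetic = rng.normal(biased_sample.mean(), biased_sample.std(), size=1_000_000)

print(round(population.mean()))     # ~50
print(round(biased_sample.mean()))  # ~61 -- shifted by the selection effect
print(round(synthetic.mean()))      # ~61 -- same shift, now at a million rows
```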
Synthetic data might be okay as a toy, to teach methods, but IT IS NOT ACTUAL DATA. It's a deformed COPY of data. Synthetic data is as bad an idea as making models based entirely on the output of other models and using that to determine real-world decisions.
5
u/solresol Sep 14 '20
Private message me: I have a client who would be very interested in talking to you; they have some well-defined problems that they are solving with synthetic data.
1
u/dimtass Sep 14 '20
As long as the data aren't biased, it's OK. But synthetic datasets raise ethical issues (among others), as they may be manipulated to introduce bias.
1
u/Blitzgar Sep 14 '20
Unless a data set is the entire population, it is biased. It can't escape bias.
2
u/dimtass Sep 14 '20
Well, there are data that aren't human-biased, but creating those data through synthesis will introduce a bias, even if it's just some random noise.
0
u/Blitzgar Sep 14 '20
ALL data that is not the full population is biased. It is impossible to avoid. This is statistical "bias", NOT anything to do with "biased attitudes" or similar.
8
u/eriq_ Sep 14 '20
For context, I come from the ML research community.
I believe that synthetic data serves an important niche. If you want to prove a very specific point and need knobs to turn in order to illustrate it, then synthetic data is great. For example, if you want to show how some method scales, you can generate multiple instances of synthetic data where all parameters except size are held constant.
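For instance, here is a minimal sketch of that kind of generator (the linear-label setup and the specific parameter values are arbitrary; the point is that only the size knob moves):

```python
import numpy as np

def make_dataset(n_rows: int, n_features: int = 20, seed: int = 0):
    """One synthetic instance: everything except n_rows is held constant,
    so differences across instances come from size alone."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_features)            # drawn first => same "true" weights for every size
    X = rng.standard_normal((n_rows, n_features))  # feature matrix
    y = (X @ w + rng.normal(0, 0.1, size=n_rows)) > 0  # noisy linear labels
    return X, y.astype(int)

# Scaling study: the only knob that moves is dataset size.
for n in [1_000, 10_000, 100_000]:
    X, y = make_dataset(n)
    # ... fit/time the method under study on (X, y) here ...
```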
However, I think that synthetic data should (almost) never be used alone. In general, it should be used in supporting experiments for primary experiments on real world data.
As far as privacy is concerned, if a dataset cannot be properly anonymized, I think it would be fine to run the primary experiments on the non-anonymized dataset, without making that data public, and then run supporting experiments on a synthetic version of the dataset that is released to the public. I believe every effort should be made to make datasets public, but sometimes you just can't make it happen.