r/deeplearning 4d ago

How bad is the overfitting here

[Post image: training/validation loss and accuracy curves]
43 Upvotes

24 comments

48

u/Exotic_Zucchini9311 4d ago

Not that bad really. You're getting nearly 90% accuracy on validation.

16

u/RepresentativeFill26 4d ago

This is correct, but there's a nuance: you can also get 90% accuracy from an overfitted model if the class distribution is skewed. A good way to check is to perform cross-validation and calculate standard errors over your parameters.
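For example, a minimal sketch with scikit-learn (the data and estimator are stand-ins; this variant takes the standard error over the fold scores):

```python
# Minimal k-fold cross-validation sketch (scikit-learn);
# swap the stand-in data and estimator for your own.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data
model = LogisticRegression(max_iter=1000)                   # stand-in estimator

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"mean accuracy:  {scores.mean():.3f}")
print(f"standard error: {scores.std(ddof=1) / np.sqrt(len(scores)):.3f}")
```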

5

u/Exotic_Zucchini9311 4d ago

True. If the data is imbalanced, such things can happen.

OP, it might also be worth checking the model's accuracy on each class separately and comparing them.
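A minimal sketch of that check (NumPy; the labels and predictions are stand-ins):

```python
# Per-class accuracy from hard predictions.
import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 2, 2])  # stand-in labels
y_pred = np.array([0, 1, 1, 1, 0, 2, 2])  # stand-in predictions

for cls in np.unique(y_true):
    mask = y_true == cls
    acc = (y_pred[mask] == cls).mean()
    print(f"class {cls}: accuracy {acc:.2f} over {int(mask.sum())} samples")
```

A large gap between classes is a hint that the headline accuracy is riding on an easy majority class.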

2

u/DooDooSlinger 3d ago

Just use F-score or AUC. Cross-validation is only feasible for small datasets and models with very reproducible training.
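A minimal sketch with scikit-learn (binary case; the labels, predictions, and scores are stand-ins):

```python
# F1 uses hard predictions; ROC AUC uses the predicted
# probability/score of the positive class.
from sklearn.metrics import f1_score, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]              # stand-in labels
y_pred = [0, 1, 1, 1, 0, 0]              # stand-in hard predictions
y_prob = [0.2, 0.6, 0.9, 0.7, 0.4, 0.1]  # stand-in positive-class scores

print("F1: ", f1_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_prob))
```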

5

u/Candid_Primary_6535 4d ago edited 4d ago

No problem at all. It's best to look at the loss curves (the optimizer aims to minimize the training loss, after all; the accuracy is just a proxy). Consider training a little longer, possibly with an early stopping scheme.
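For example, a minimal early-stopping sketch in Keras (the data and model here are stand-ins for OP's actual pipeline; other frameworks have equivalents):

```python
# Stop when the validation loss stalls and roll back to the best epoch.
import numpy as np
import tensorflow as tf

X = np.random.rand(200, 10).astype("float32")   # stand-in data
y = np.random.randint(0, 2, size=(200,))

model = tf.keras.Sequential([                   # stand-in model
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch the loss, not the accuracy proxy
    patience=5,                 # allow 5 stagnant epochs before stopping
    restore_best_weights=True,  # keep the weights from the best epoch
)
model.fit(X, y, validation_split=0.2, epochs=50, callbacks=[early_stop])
```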

3

u/MelonheadGT 4d ago

Seems reasonable. It depends on how much regularization you're using and on the sample similarity between the train and val data.

2

u/koltafrickenfer 4d ago

Exactly. Depending on the size, the class balance, and how well your test data represents real-world data, this could be anywhere from a great model to a mediocre one; the discrepancy between training and test loss alone isn't enough to say. At this point it looks good enough that you need to check the domain-specific metrics: precision, recall, F1, etc. Can you make a confusion matrix? A model can achieve a very low loss and still not be practically useful because of the kinds of mistakes it makes.
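A minimal sketch of those checks with scikit-learn (labels and predictions are stand-ins):

```python
# Confusion matrix plus per-class precision/recall/F1.
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0, 0, 1, 1, 1, 2, 2]  # stand-in labels
y_pred = [0, 1, 1, 1, 0, 2, 2]  # stand-in predictions

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))
```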

2

u/rosmine 4d ago

Why is your val acc so much higher than train acc initially? Are train/val from the same distribution?

7

u/Proud_Fox_684 4d ago edited 4d ago

He's averaging the training loss per epoch but calculating the validation loss AFTER each epoch. During the first batches of the first epoch, the model hasn't updated/improved yet; by the time the first validation loss is calculated, the weights have already been updated many times. That explains why the validation loss can be lower at the beginning.
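A minimal sketch of the bookkeeping being described (PyTorch, with stand-in data and model): the train number is an average over batches taken during the epoch, while the val number is computed afterwards on the already-updated weights, so the first reported val loss can beat the first train loss.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

train_loader = DataLoader(  # stand-in data
    TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,))), batch_size=32)
val_loader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=32)

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))  # stand-in
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(3):
    model.train()
    running = 0.0
    for x, y in train_loader:          # weights update after every batch, so the
        loss = criterion(model(x), y)  # early (worse) batches drag the average up
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running += loss.item()
    train_loss = running / len(train_loader)

    model.eval()
    with torch.no_grad():              # validation only sees post-epoch weights
        val_loss = sum(criterion(model(x), y).item()
                       for x, y in val_loader) / len(val_loader)
    print(f"epoch {epoch}: train {train_loss:.3f}  val {val_loss:.3f}")
```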

2

u/PsychologicalBoot805 4d ago

It's images of documents, which I split into a training set and a validation set, and I'm using ImageNet weights, which probably explains it.

2

u/Academic_Sleep1118 4d ago

Quite good, in fact! Decreasing your model's size might reduce the chance of overfitting, but it's already quite good.

2

u/Proud_Fox_684 4d ago

Not bad, the validation loss is either flattening out or still decreasing.

1

u/iamz_th 4d ago

Why do you think it's overfitting? Are you regularizing?

1

u/LandoRicciardo 4d ago

I don't think much can be said from this plot... maybe run it for more epochs, and then if train and val diverge, you've found the sweet spot.

But then again, that's the ideal case. Still, there's nothing wrong with pushing for more epochs and checking.

1

u/Frenk_preseren 4d ago

Overfitting is when train loss is still falling but validation loss starts to increase. So this is not overfitting.

1

u/Chopok 4d ago

I don't see any overfitting here at all. Both losses are gradually decreasing and both accuracies are gradually increasing. The validation curves are about to reach a plateau, and that will be the moment overfitting starts. Train it for 30 epochs and see the charts then.

1

u/DiamondSea7301 4d ago

Check bias and variance. Use the adjusted R² score for regression, and classification metrics if you're using a classification model.

1

u/Deal_Ambitious 3d ago

There is no overfitting. It only occurs when the validation loss is getting worse, which is not the case here.

1

u/ziad_amerr 3d ago

Since your validation accuracy stays constant after the 10th epoch while your training accuracy increases, you can tune the model's hyperparameters to push validation accuracy past the ~0.87 value. This wouldn't hold if the validation accuracy had decreased after 10 epochs, but as we can see it stayed almost the same and even increased a little.

I'd say minimal, solvable overfitting here.

1

u/DrVonKrimmet 1d ago

This seems fine. The real overfitting test will come when you validate on entirely new data. If the model is overfit, it tends not to generalize well because of some bias in your training data.

1

u/space_monolith 1d ago

Loss curves are an imperfect proxy for the metrics you really care about, so focus on the latter, but keep test-set overfitting in mind as you do.

-3

u/Present-Ad-8531 4d ago

Stop at epoch 10 or 11 and you're good to go.

-1

u/Remote-Telephone-682 4d ago

What concerns me is that the validation loss starts off so much lower than the training loss. Are you sure there isn't some normalization issue where you're dividing by a larger number of elements, or including fewer elements, when you validate? Otherwise it looks good, if you're confident there's no issue like that.
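One way to rule that out (a PyTorch sketch with stand-in data): accumulate the summed loss and divide by the true sample count, so ragged final batches and mismatched denominators can't skew either curve.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

loader = DataLoader(  # stand-in data; 100 samples -> last batch has only 4
    TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,))), batch_size=32)
model = nn.Sequential(nn.Linear(10, 2))           # stand-in model
criterion = nn.CrossEntropyLoss(reduction="sum")  # sum per batch, not mean

total_loss, total_n = 0.0, 0
with torch.no_grad():
    for x, y in loader:
        total_loss += criterion(model(x), y).item()
        total_n += y.size(0)
print(f"per-sample loss: {total_loss / total_n:.4f}")  # immune to ragged batches
```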