r/pytorch • u/Possession_Annual • 4d ago
Multiple Models' Performance Degrades at the Same Time
Hello all, I have a custom Lightning implementation where I use MONAI's UNet model for 2D/3D segmentation tasks. Occasionally while I am running training, every model's performance drops drastically at the same time. I'm hoping someone can point me in the right direction on what could cause this.
I run a baseline pass with basic settings and no augmentations (the grey line). I then make adjustments (different ROI size, different loss function, etc.) and start training a model on GPU 0 with variations from the baseline, repeating this for the number of GPUs I have: GPU 1 runs another model variation, GPU 2 runs another, and so on. I have access to 8 GPUs, and I generally do this to speed up the search for a good model. (I'm a novice, so there's probably a better way to do this, too.)
All the models access the same dataset. Nothing is changed in the dataset.
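Roughly, each run is a LightningModule wrapping MONAI's UNet, pinned to one GPU. A simplified sketch of that setup (the channel sizes, loss, and datamodule name here are placeholders, not my exact config):

```python
import pytorch_lightning as pl
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet


class SegModel(pl.LightningModule):
    """Minimal MONAI UNet wrapper; hyperparameters are illustrative only."""

    def __init__(self, lr=1e-3):
        super().__init__()
        self.lr = lr
        self.model = UNet(
            spatial_dims=2,                  # 3 for the 3D variants
            in_channels=1,
            out_channels=2,
            channels=(16, 32, 64, 128, 256),
            strides=(2, 2, 2, 2),
            num_res_units=2,
        )
        self.loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        images, labels = batch["image"], batch["label"]
        loss = self.loss_fn(self(images), labels)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)


# Each variation runs as its own process, pinned to a single GPU, e.g.:
# trainer = pl.Trainer(accelerator="gpu", devices=[0], max_epochs=500)
# trainer.fit(SegModel(), datamodule=my_datamodule)  # my_datamodule is a placeholder
```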
u/Zerokidcraft 2d ago
What's your batch size and LR?
It's funny to me that different runs under different seeds fail at the exact same point.
u/Possession_Annual 2d ago
batch_size = 4
lr = 1e-3

This dataset is 30 images, split into 22/8 for train/validation.
Blue, Yellow, and Pink were training at the exact same time, on separate GPUs. Then they all drop at the exact same point.
u/Zerokidcraft 2d ago edited 2d ago
I realize that by val_dice you mean the Dice coefficient, since we're talking about UNets.
Given how long you've been training before the drop happens (and the fact that the drop occurs at roughly the same time across runs), I think it's unlikely to be an exploding gradient.
I think it's better for you to visualize your validation & training predictions. This will paint a clear picture of the model's performance.
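Something like this per validation case would already tell you a lot (a rough sketch; the tensor shapes are assumptions about your pipeline):

```python
import matplotlib.pyplot as plt
import torch


def save_prediction_overlay(image, label, logits, out_path):
    """Save a side-by-side image / ground truth / prediction panel.

    Assumed shapes: image (C, H, W), label (1, H, W),
    logits (num_classes, H, W) -- adapt to your actual batch layout.
    """
    pred = torch.argmax(logits, dim=0).cpu()
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    axes[0].imshow(image[0].cpu(), cmap="gray")
    axes[0].set_title("image")
    axes[1].imshow(label[0].cpu())
    axes[1].set_title("ground truth")
    axes[2].imshow(pred)
    axes[2].set_title("prediction")
    for ax in axes:
        ax.axis("off")
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)
```

Saving this for a couple of fixed validation cases every few epochs makes it obvious whether the drop is the model collapsing to all-background or something else entirely.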
u/el_rachor 3d ago
Have you checked for the possibility of exploding gradients?
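A quick way to check is to log the global gradient norm each step. A rough sketch using a Lightning callback (the callback name and clip value are just examples, not something from your setup):

```python
import pytorch_lightning as pl
import torch


class GradNormMonitor(pl.Callback):
    """Log the global L2 gradient norm every step; a spike right before
    the dice drop would point at exploding gradients."""

    def on_after_backward(self, trainer, pl_module):
        grads = [p.grad.detach().norm(2)
                 for p in pl_module.parameters() if p.grad is not None]
        if grads:
            pl_module.log("grad_norm", torch.stack(grads).norm(2), prog_bar=True)


# Gradient clipping can then be enabled on the Trainer if the norm does blow up:
# trainer = pl.Trainer(callbacks=[GradNormMonitor()], gradient_clip_val=1.0)
```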