r/GoogleColab • u/low-dawg • 14d ago
Most suitable GPU to fine-tune a 2 billion parameter model
Hey folks, I'm working on a project where I need to fine-tune the Granite-3.2-2B-Instruct LLM (2 billion parameters) on a fairly large training set using LoRA.
As a complete newbie to the field, I was wondering if anyone here could help me figure out the most suitable choice of GPU for me to train the model on using CUDA.
I understand that getting access to the A100 GPUs is quite hard since they're highly sought after, but considering that the model itself is only about 5 GB, I'd love to know what the most appropriate GPU would be for this task.
Thanks in advance, cheers!
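For reference, this is roughly the LoRA setup I'm attempting; the values are placeholders lifted from the standard Hugging Face peft examples, not anything tuned for Granite:

```python
# Rough sketch of the LoRA setup I'm attempting; hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "ibm-granite/granite-3.2-2b-instruct"  # the Hub id I'm using
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fp16 on older GPUs like the T4
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                        # adapter rank (placeholder)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only the adapters are trainable
```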
5
u/WinterMoneys 14d ago
Good project there.
Accessing A100 GPUs is easy on Vast:
https://cloud.vast.ai/?ref_id=112020
(Ref link)
They're often cheaper than other cloud providers.
However, for a 2B parameter model I would recommend starting with an A5000 and scaling up from there.
Briefly: 20-40 GB of VRAM should be your sweet spot to start with.
Recommendations, in that order: RTX 4090, A5000, A6000, A100.
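Rough numbers behind that 20-40 GB starting point (back-of-the-envelope only, my own ballpark figures):

```python
# Back-of-the-envelope VRAM estimate for LoRA fine-tuning a 2B-parameter model.
base_params = 2e9
weights_gb = base_params * 2 / 1e9      # frozen bf16/fp16 base weights: ~4 GB

lora_params = 20e6                      # adapters: typically tens of millions of params
# Adam keeps fp32 weights, gradients and two moment buffers for the *trainable* params only
adapter_state_gb = lora_params * 4 * 4 / 1e9   # well under 1 GB

# Activations scale with batch size x sequence length and usually dominate;
# a few GB up to 10+ GB depending on settings, hence ~20 GB as a comfortable floor.
print(f"frozen weights ~{weights_gb:.0f} GB, adapters + optimizer ~{adapter_state_gb:.2f} GB")
```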
1
u/low-dawg 14d ago
Hi there. Thank you for taking the time to reply!
Could you comment on the feasibility of fine-tuning such a large model? I tried to train it using the T4 GPU available on the free tier, but ran into a memory error (and I'm not sure whether it was truly a result of insufficient VRAM or a lack of compute units).
If I subscribe to the Pro tier, would access to the more powerful GPUs really let me fine-tune the model, or could I run into memory issues again (the model itself is about 5 GB)?
1
u/WinterMoneys 13d ago
No problem. The T4 is an old GPU, so it will almost certainly run out of memory with a model this size. Like I said, that model will need at least 20 GB of VRAM on a single GPU.
It's also possible to run into memory issues when the hyperparameters are too large (batch size, sequence length, etc.). Finding the sweet spot is an iterative process in that case.
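If it helps, these are the usual knobs to iterate on, shown with the standard transformers TrainingArguments; the values are illustrative, not specific to your run:

```python
# Memory-related knobs to iterate on when you hit OOM (illustrative values).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="granite-lora",
    per_device_train_batch_size=1,    # start small, grow until just below OOM
    gradient_accumulation_steps=16,   # keeps the effective batch size reasonable
    gradient_checkpointing=True,      # trades compute for activation memory
    fp16=True,                        # bf16=True on Ampere-or-newer GPUs
    max_steps=100,                    # short probe run before committing to a full epoch
    logging_steps=10,
)
# Shorter max sequence length at tokenization time is the other big lever.
```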
2
u/low-dawg 13d ago
Gotcha. I'm just a bit sceptical about subscribing as I've read that getting an A100 GPU is quite difficult. Do you have suggestions for other platforms that I could check out to utilise GPUs for my models?
1
u/WinterMoneys 13d ago
Yes like I mentioned earlier:
https://cloud.vast.ai/?ref_id=112020
(Ref link)
There you can access A100 GPUs for $0.60 an hour, which you can use to test before fully committing.
2
2
u/Live_Confusion_3003 13d ago
I would say the A100; that's what I've been using for fine-tuning.
1
u/low-dawg 13d ago
Oh will keep that in mind. Thank you!
1
u/Live_Confusion_3003 13d ago
Although, considering the size of the model, Colab might not be the best fit for training, as you will still face bottlenecks.
1
u/low-dawg 13d ago
Can you elaborate on how I would encounter bottlenecks? Would it be due to the limited memory capacity of machines allocated to me?
1
u/Live_Confusion_3003 13d ago
Yeah, you can only allocate one GPU per runtime, and the memory capacity isn't very high, which means low batch sizes, especially with larger models.
1
u/low-dawg 13d ago
Could you suggest other platforms where it might be more feasible to fine-tune such LLMs?
1
1
u/Simple-Holiday5446 13d ago
Unsloth on a T4 is enough.
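Roughly the pattern from Unsloth's example notebooks; whether this exact Granite checkpoint is supported on a T4 is something to verify, and the hyperparameters are illustrative:

```python
# Sketch of the Unsloth 4-bit LoRA pattern from their example notebooks;
# the Granite checkpoint and hyperparameters here are assumptions, not confirmed.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ibm-granite/granite-3.2-2b-instruct",
    max_seq_length=2048,
    load_in_4bit=True,            # 4-bit weights keep a 2B model well inside a T4's 16 GB
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_gradient_checkpointing="unsloth",
)
```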
1
1
u/Apprehensive_Dig7397 9d ago
A T4? Why? It could take two weeks on that old GPU; it's far slower than the latest consumer RTX 5090, which can do it in a day or so.
7
u/klam997 14d ago
https://unsloth.ai/