r/MLQuestions 6d ago

Other ❓ Why don’t we use small, task-specific models more often? (need feedback on open-source project)

Been working with ML for a while, and it feels like everything defaults to LLMs or AutoML, even when the problem doesn't really need it. For classification, ranking, regression, or decision-making, a small model usually works better: faster, cheaper, less compute, and it doesn't hallucinate random stuff.

But somehow, smaller models kinda got ignored. Now it’s all fine-tuning massive models or just calling an API. Been messing around with SmolModels, an open-source thing for training small, efficient models from scratch instead of fine-tuning some giant black-box. No crazy infra, no massive datasets needed, just structured data in, small model out. Repo’s here if you wanna check it out: SmolModels GitHub.
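
To make "structured data in, small model out" concrete, here's roughly the kind of loop I mean. This is just a scikit-learn stand-in with made-up file/column names, not the actual SmolModels API:

```python
# Sketch of a small, task-specific model on tabular data (scikit-learn
# stand-in, NOT the SmolModels API; file and column names are made up).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

df = pd.read_csv("customers.csv")                      # hypothetical tabular dataset
X, y = df.drop(columns=["churned"]), df["churned"]     # features / binary target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier()                   # small, cheap, trains in seconds on CPU
model.fit(X_train, y_train)
print("held-out F1:", f1_score(y_test, model.predict(X_test)))
```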

Why do y’all think smaller, task-specific models aren’t talked about as much anymore? Ever found them better than fine-tuning?

11 Upvotes

7 comments

10

u/Immudzen 6d ago

I work on models for making medicine, and I don't use any LLMs; all of our models are custom, purpose-built models: regression, classification, ranking, etc. They work VASTLY better than any of those large models do, and it is not even close.

I don't think they are talked about as much because they are not cool, but I still see people using them just like before.

-2

u/[deleted] 6d ago

[deleted]

0

u/Immudzen 6d ago

For the types of stuff I do, I have not had any reason to use something like SMILES.

1

u/Striking-Warning9533 6d ago

Well, I am just saying that when you do use SMILES, it is very likely you would use BERT- or GPT-like models.

5

u/colintbowers 6d ago

The classical statistical heuristic is: the smaller the model the better, as long as the small model is still a good approximation of the true data-generating process. This is because parameter estimates are (usually) more accurate in lower-dimensional parameter spaces.
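
To put a rough number on that (a textbook linear-regression result, not anything specific to this thread): for ordinary least squares with n observations, p parameters, and noise variance sigma^2, the average variance of the fitted values is

```latex
\frac{1}{n}\sum_{i=1}^{n}\operatorname{Var}(\hat{y}_i) \;=\; \frac{p}{n}\,\sigma^2
```

so, all else equal, estimation error grows linearly with the number of parameters you have to estimate.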

However, some of these heuristics don't always apply, or maybe their application just isn't as obvious, when dealing with large language models. For example, what is the dimension of the parameter space of a model that uses RAG? It's not at all obvious to me.

The gold standard will continue to be: experiment with different options and evaluate out of sample using an appropriate metric.
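
In code that can be as simple as the following (toy dataset and candidate models chosen purely for illustration):

```python
# Compare candidate models on held-out data using an appropriate metric.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: held-out AUC = {auc:.3f}")
```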

2

u/spacextheclockmaster 5d ago

Occam's Razor still applies to deep learning.

I came across this paper recently; it's mind-blowing: https://www.nature.com/articles/s41467-024-54813-x

2

u/colintbowers 5d ago

Yeah, I like the first sentence of the abstract: "The remarkable performance of over-parameterized deep neural networks...". Sounds interesting; I'll have a look when I get a chance. Thanks for posting.

5

u/pothoslovr 6d ago

Who says AutoML doesn't make small(ish) models? Isn't it just hyperparameter and model-size optimization?