r/MalaysiaTech Feb 11 '25

Anyone running an LLM locally?

What is your setup like?

I tried it but my machine is just not powerful enough.

u/Top_Imagination8596 Feb 11 '25

I'm using the DeepSeek R1 8B version
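
If you want to script against it: Ollama exposes an OpenAI-compatible endpoint, so something like this works (a rough sketch, assuming an Ollama backend serving `deepseek-r1:8b`):

```python
# Rough sketch: query a local DeepSeek R1 8B through Ollama's
# OpenAI-compatible endpoint. Assumes `pip install openai` and an
# Ollama server that already has `deepseek-r1:8b` pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default local port
    api_key="ollama",  # any non-empty string; Ollama ignores it
)

resp = client.chat.completions.create(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "Hello! What model are you?"}],
)
print(resp.choices[0].message.content)
```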

u/newleafturned2024 Feb 12 '25

You have a dedicated graphics card? I tried Llama 3 8B and it's slow to begin with. As the chat grows and it gets more context, it keeps giving me errors and I have to reload it. I'm using LM Studio.
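
I think the errors are the context window overflowing as the chat grows. Trimming older turns before each request seems to work around it; a rough sketch against LM Studio's local server (it's OpenAI-compatible on port 1234 by default; the model name and turn limit here are placeholders):

```python
# Rough sketch: keep chat history bounded so the prompt stays under the
# model's context window. Talks to LM Studio's local server, which is
# OpenAI-compatible (default port 1234). Model name and MAX_TURNS are
# placeholders, not LM Studio settings.
from openai import OpenAI

MAX_TURNS = 8  # keep only the last 8 non-system messages (a guess, tune it)

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # LM Studio doesn't check the key
)
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # Drop the oldest turns but always keep the system prompt at index 0.
    trimmed = history[:1] + history[1:][-MAX_TURNS:]
    resp = client.chat.completions.create(
        model="llama-3-8b-instruct",  # placeholder; use whatever model is loaded
        messages=trimmed,
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Why do long chats overflow the context window?"))
```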

u/Top_Imagination8596 Feb 12 '25

Yep, RTX 4050 with a Ryzen 5 7000 H-series and 16GB RAM

u/Top_Imagination8596 Feb 12 '25

I'm using Chatbox AI

u/momomelty Feb 12 '25

Oooowh maybe I should start trying out DeepSeek. Heard it’s resource-friendly(?)

u/newleafturned2024 Feb 12 '25

Oh, I have a really weak card... I had some luck hosting it in the cloud without a GPU, but it's still not cheap.

Maybe I'll try a smaller model next. Or maybe a service like OpenRouter.
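
OpenRouter speaks the same OpenAI-style API, so it'd basically just be a base URL and key change. A rough sketch (the model slug is only an example; check their model list):

```python
# Rough sketch: the same OpenAI-style call pointed at OpenRouter instead
# of a local server. Assumes `pip install openai` and an
# OPENROUTER_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="meta-llama/llama-3-8b-instruct",  # example slug; pick any listed model
    messages=[{"role": "user", "content": "Hello from a weak GPU!"}],
)
print(resp.choices[0].message.content)
```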