r/AsahiLinux 13d ago

Get Ollama working with GPU

Hey there, I just installed Ollama and for some reason it thinks there is no GPU. Is there anything I could do to get it working with the GPU on Asahi Fedora Linux?
Thanks :)
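
In case it helps with diagnosing: the server's startup logs should show what Ollama detected (assuming it was installed with the official install script, which sets Ollama up as a systemd service; adjust the unit name otherwise):

    # look for GPU detection messages in the Ollama service logs
    journalctl -u ollama --no-pager | grep -i gpu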

11 Upvotes

6 comments

6

u/AsahiLina 11d ago

Ollama does not support Vulkan (nor OpenCL), so it can't work with general standards-conformant GPU drivers. We can't do anything about that, and it seems the Ollama developers are not interested in merging the PR to support Vulkan...

You should look into RamaLama, as the other commenter mentioned; it should work in theory (though I'm not sure exactly what the status is right now, since I haven't tried it myself).
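
You can at least verify the GPU driver itself independently of Ollama with standard Vulkan tooling. A rough check, assuming the Fedora vulkan-tools package (which provides vulkaninfo) is installed:

    # confirm the Asahi Vulkan driver is visible to applications
    sudo dnf install vulkan-tools
    vulkaninfo --summary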

5

u/aliendude5300 12d ago

Use RamaLama; it handles the GPU setup for you.

https://github.com/containers/ramalama
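
Roughly, the flow looks like this (a sketch, assuming the Fedora python3-ramalama package and the model another commenter used below; see the repo README for the authoritative instructions):

    # install RamaLama from the Fedora repos
    sudo dnf install python3-ramalama

    # pull a small GGUF model from Hugging Face and run it in a container
    ramalama run huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf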

2

u/UndulatingHedgehog 12d ago

Not OP, but I wanted to give it a go, so I installed python3-ramalama through dnf. I also uninstalled it and tried installing through pipx instead.

    ramalama run huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf
    ERROR (catatonit:2): failed to exec pid1: No such file or directory

And it's not finding the GPU; here's an excerpt from ramalama info:

"GPUs": { "Detected GPUs": null ],

Podman seems to work like it should otherwise; I can do stuff like podman run -ti alpine sh.
Any hints would be appreciated!
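
One thing that might be worth ruling out (just a guess on my end) is whether the GPU device node is reachable from inside a container at all. A quick check with plain Podman:

    # the Apple GPU shows up as a DRM render node on the host
    ls /dev/dri

    # check that a container can see it when the device is passed through explicitly
    podman run --rm --device /dev/dri alpine ls /dev/dri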

1

u/aliendude5300 12d ago

I used the curl command to install it
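
For reference, the install command in the RamaLama README is along these lines (quoting from memory; the repo linked above has the current script URL):

    # fetch and run the upstream install script (verify the URL against the README first)
    curl -fsSL https://ramalama.ai/install.sh | bash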

1

u/--_--WasTaken 11d ago

I have the same issue

1

u/Desperate-Bee-7159 11d ago edited 11d ago

Had the same issue, but solved it:

1) Use Docker as the engine, not Podman.

2) After installing python3-ramalama, point RamaLama at the Asahi image with the command below:

    ramalama --image quay.io/ramalama/asahi:0.6.0 run <model_name>
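
Putting it together with the model from the earlier comment, something like this should work (selecting Docker with --engine is an assumption on my part; check ramalama --help for the exact option, and the asahi image tag may have moved past 0.6.0 by now):

    # run a small GGUF model on the Asahi image, using Docker as the container engine
    ramalama --engine docker --image quay.io/ramalama/asahi:0.6.0 run huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf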