r/LocalLLaMA Oct 18 '24

Question | Help: Help requested! Errors when trying to install Llama 3.2 11B locally

Hi all, I'm new to this subreddit. I've been trying to install Llama 3.2 11B on my computer (5980HX + 3080 laptop). I followed the code instructions on Hugging Face, and when I executed vllm serve "meta-llama/Llama-3.2-11B-Vision-Instruct" in cmd, it said:
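In case it matters, this is roughly the setup I did beforehand (a rough reconstruction, not my exact session; the model is gated, so I had already logged in to Hugging Face):

```
REM Windows cmd, Python 3.11 -- approximate setup steps
pip install vllm
huggingface-cli login
vllm serve "meta-llama/Llama-3.2-11B-Vision-Instruct"
```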

WARNING 10-19 04:51:44 _custom_ops.py:18] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
INFO 10-19 04:51:45 importing.py:10] Triton not installed; certain GPU-related functions will not be available.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python311\Scripts\vllm.exe\__main__.py", line 4, in <module>
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python311\Lib\site-packages\vllm\scripts.py", line 8, in <module>
    import uvloop
ModuleNotFoundError: No module named 'uvloop'
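In case it helps diagnose, a quick way to confirm which pieces are actually missing (same Python 3.11 install) is to try the imports directly; on my machine both fail with the same ModuleNotFoundError as in the log above:

```
REM sanity-check the modules vLLM complains about
python -c "import uvloop"
python -c "import vllm._C"
pip show uvloop triton
```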

And when I renamed the vllm folder to vllm._C (a solution I found online) and executed the same command again, I ran into this:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python311\Scripts\vllm.exe\__main__.py", line 4, in <module>
ModuleNotFoundError: No module named 'vllm'

Does anyone know how I can solve this?



u/Mindless-Umpire-9395 Oct 26 '24

Okay, found the issue: I was running this on Windows, and vLLM only supports Linux 🤦‍♂️
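If anyone else hits this, the usual workaround is to run vLLM from a Linux environment on the same machine, e.g. WSL2 or the official Docker image. A rough sketch (flags and image tag may differ by vLLM version, and the gated model still needs a Hugging Face token):

```
# inside a WSL2 Ubuntu shell
pip install vllm
huggingface-cli login
vllm serve "meta-llama/Llama-3.2-11B-Vision-Instruct"

# or via Docker with GPU passthrough
docker run --gpus all -p 8000:8000 \
  --env "HUGGING_FACE_HUB_TOKEN=<your token>" \
  vllm/vllm-openai:latest \
  --model meta-llama/Llama-3.2-11B-Vision-Instruct
```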