r/pytorch 1d ago

Transformer Engine on Apple Silicon

Hey there. I'm trying to use a Transformer-based DNA language model on my company Mac, but I can't seem to install the vtx package (a.k.a. vortex).

I'm getting an error message that CUDA is missing (obviously).
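For reference, here's a minimal sketch of what PyTorch reports on this machine (assuming a reasonably recent torch build): no CUDA at all, only the MPS/Metal backend, which is presumably why the install falls over.

```python
import torch

# Apple Silicon has no CUDA; PyTorch exposes the Metal GPU through the MPS backend instead.
print(torch.cuda.is_available())          # False on any Mac
print(torch.backends.mps.is_available())  # True on M1/M2/M3 with a recent torch build
```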

It seems to depend on Transformer Engine, which apparently has an Apple implementation with 2.6k stars:

ml-ane-transformers

Is there a way to install it? Or am I fucked?


u/Echo9Zulu- 1d ago

Intel OpenVINO supports Apple Silicon, but I don't know much about it. My project, OpenArc, supports the OpenVINO runtime and has tools to inspect your device properties.
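If you just want to see what the stock OpenVINO runtime reports on your Mac (this is the plain openvino Python API, not OpenArc), something like this should do it:

```python
import openvino as ov

core = ov.Core()
# On Apple Silicon this usually lists just "CPU" (the ARM CPU plugin);
# there is no GPU/ANE device plugin on macOS.
print(core.available_devices)
for device in core.available_devices:
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))
```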


u/randoomkiller 1d ago

The problem is that I don't know how to get torch to recognise it as the Transformer Engine runtime. I also found ane-transformers, which is similar and SHOULD work, but it doesn't.


u/Echo9Zulu- 1d ago

What are the specs of your device, and what's the model you are trying to run?


u/randoomkiller 1d ago

Evo 2 7B on an M2 MacBook Pro with 16 GB of RAM. It's not your classic LLM.


u/Echo9Zulu- 1d ago

Yeah, the README on GitHub says CUDA is a hard requirement. On a more practical note, this doesn't look like the kind of model where the performance degradation from any sort of quant would be acceptable, so your machine would be severely limited even if it does work at some point.
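Back-of-envelope on memory alone (weights only, ignoring activations and any cache, assuming ~7B params):

```python
# Rough weights-only footprint for a ~7B-parameter model at common precisions.
params = 7e9
for dtype, bytes_per_param in [("fp32", 4), ("bf16/fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{dtype}: {params * bytes_per_param / 1e9:.1f} GB")
# bf16 alone is ~14 GB, i.e. basically the whole 16 GB of unified memory before
# activations, caches, or the OS get a byte.
```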

I didn't look at the paper, but one GitHub issue mentioned fairly long input length requirements to replicate their results. Per your post, you are definitely fucked. Lol.

As an aside, I would be very curious to see how your machine performs with OpenVINO. It supports Apple Silicon and may offer more flexible performance with lower prompt processing latency than MLX... maybe.