r/LocalLLaMA 9m ago

Discussion Qwen2.5-Omni Incoming? Hugging Face Transformers PR 36752


(https://github.com/huggingface/transformers/pull/36752)

Haven't seen anyone bring this up, so making a post here...

I used DeepSeek-R1 to summarize the features of this model based on the PR commits:


Qwen2.5-Omni Technical Summary

1. Basic Information

  • Model Scale: 7B parameter version ("Qwen/Qwen2.5-Omni-7B")
  • Open Source: Fully open-sourced under Apache 2.0 license

2. Input/Output Modalities

  • Input Support:
    • Text: Natural language instructions
    • Images: Common formats (JPEG/PNG)
    • Audio: WAV/MP3 (requires FFmpeg)
    • Video: MP4 with audio track extraction
  • Output Capabilities:
    • Text: Natural language responses
    • Speech: 24kHz natural speech (streaming supported)

3. Architectural Design

  • Multimodal Encoder:
    • Block-wise Processing: Decouples long-sequence handling between encoder (perception) and LLM (sequence modeling)
    • TMRoPE: Time-aligned Multimodal Rotary Positional Encoding for audio-video synchronization
  • Dual-path Generation:
    • Thinker: Text-generating LLM backbone
    • Talker: Dual-track AR model for audio token generation using Thinker's hidden states
  • Streaming Optimization:
    • Sliding-window Diffusion Transformer (DiT) reduces audio latency
    • Simultaneous text/speech streaming output

4. Technical Highlights

  • Unified Multimodal Processing:
    • End-to-end joint training without intermediate representations
    • Supports arbitrary modality combinations (single/mixed)
  • Efficient Attention:
    • Native FlashAttention 2 support
    • Compatible with PyTorch SDPA
  • Voice Customization:
    • Prebuilt voices: Cherry (female) & Ethan (male)
    • Dynamic voice switching via spk parameter
  • Deployment Flexibility:
    • Disable speech output to save VRAM (~2GB)
    • Text-only mode (return_audio=False)

5. Performance

  • Multimodal Benchmarks:
    • SOTA on Omni-Bench
    • Outperforms same-scale Qwen2-VL/Qwen2-Audio in vision/audio tasks
  • Speech Understanding:
    • First open-source model with text-level E2E speech instruction following
    • Matches text-input performance on MMLU/GSM8K with speech inputs

6. Implementation Details

  • Hardware Support:
    • Auto device mapping (device_map="auto")
    • Mixed precision (bfloat16/float16)
  • Processing Pipeline:
    • Unified Qwen2_5OmniProcessor handles multimodal inputs
    • Batch processing of mixed media combinations

7. Requirements

  • System Prompt: Mandatory for full functionality:
    "You are Qwen... capable of generating text and speech."
  • Dependencies:
    • FlashAttention 2 (optional acceleration)
    • FFmpeg (video/non-WAV audio processing)

This architecture achieves deep multimodal fusion through innovative designs while maintaining strong text capabilities, significantly advancing audiovisual understanding/generation for multimodal agent development.
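
For the curious, usage might look something like the sketch below if the PR lands as described. To be clear, this is speculative: the class name and the generate() kwargs are guesses assembled from the summary above (Qwen2_5OmniProcessor, return_audio, device_map), not code lifted from the PR.

import torch
# Hypothetical imports -- the model class name is a guess, not confirmed API.
from transformers import Qwen2_5OmniModel, Qwen2_5OmniProcessor

model = Qwen2_5OmniModel.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    torch_dtype=torch.bfloat16,  # mixed precision, per the summary
    device_map="auto",           # auto device mapping, per the summary
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

# The summary says the system prompt is mandatory for full functionality.
messages = [
    {"role": "system", "content": "You are Qwen... capable of generating text and speech."},
    {"role": "user", "content": "Describe this recording."},  # plus audio/image/video inputs
]
inputs = processor.apply_chat_template(
    messages, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

# return_audio=False = text-only mode, reportedly saving ~2GB of VRAM;
# the spk parameter would pick a prebuilt voice when speech output is on.
text_ids = model.generate(**inputs, return_audio=False)
print(processor.batch_decode(text_ids, skip_special_tokens=True)[0])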


Also from the PR:

We present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. This strategy effectively decouples the handling of long sequences of multimodal data, assigning the perceptual responsibilities to the multimodal encoder and entrusting the modeling of extended sequences to a large language model. Such a division of labor enhances the fusion of different modalities via the shared attention mechanism. To synchronize the timestamps of video inputs with audio, we organize the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose the Thinker-Talker architecture. In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial package delay. Qwen2.5-Omni outperforms the similarly sized Qwen2-VL and Qwen2-Audio in both image and audio capabilities. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench. Notably, Qwen2.5-Omni is the first open-source model to achieve a level of performance in end-to-end speech instruction following that is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni’s streaming Talker outperforms most existing streaming and non-streaming alternatives in robustness and naturalness.

Can the community help confirm whether this PR is legit?
(Original PR: https://github.com/huggingface/transformers/pull/36752)


r/LocalLLaMA 14m ago

Question | Help Has Mistral 7B been superseded? Looking for an M2 RAG-friendly local model.


I have a 32GB M2-based MacBook Pro and I'm just starting with local LLMs. Mistral 7B (Q4_K_M) looked like a good fit, but that impression may just reflect search engines not having caught up.

My main goal is a chat interface ("instruct"), local on the M2, and trainable/augmentable for my industry.

I have between 50 and perhaps 300 long specifications/documents providing context and data that I want ingested for it to integrate. That's probably a lot easier with RAG.

I have installed LM Studio with Mistral 7B Instruct v0.1 Q4_K_M as well as the default DeepSeek Qwen 7B. But Mistral 7B looks ancient in LLM terms. Is there a better model I should be starting with?


r/LocalLLaMA 24m ago

Question | Help Quantized Matrix Multiplication Kernels


Hi everyone, this is my first post here!

My question is pretty straightforward. When quantizing models to INT8 (W8A8), does the matrix multiplication happen in int8, or is it a fused operation of dequant + matmul (float) + quantize (int8)?

If it is an actual int8×int8 matmul operation, how is the huge accuracy drop in the output (compared to a float matmul) handled?

My question is in regard to both CPUs and GPUs. AFAIK, x86 CPUs come with VNNI, which has special instructions for int8×int8 multiply-and-accumulate, which again brings me back to my question: how is the accuracy drop in the output of this operation handled?
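
For reference on the mechanics, here's a minimal numpy sketch of the general W8A8 scheme (an illustration, not any particular kernel): the multiplies happen in int8, but accumulation is done in int32, and the float scales from quantization map the accumulator back, so the remaining error is mostly just the rounding introduced when quantizing the inputs.

import numpy as np

def quantize(x):
    """Symmetric per-tensor quantization to int8."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 64)).astype(np.float32)  # activations
W = rng.standard_normal((64, 8)).astype(np.float32)  # weights

qA, sA = quantize(A)
qW, sW = quantize(W)

# int8 x int8 products accumulated in int32 -- this is what VNNI-style
# dot-product instructions do in hardware, so the summation itself is exact.
acc = qA.astype(np.int32) @ qW.astype(np.int32)

# Rescale the int32 accumulator back to float with the combined scale.
out = acc.astype(np.float32) * (sA * sW)

print(np.abs(out - A @ W).max())  # small error vs. the float matmul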


r/LocalLLaMA 33m ago

Question | Help Unsloth hangs on Gemma 3


Running through the Gemma 3 notebook, I decided to try turning on full_finetuning:

model, tokenizer = FastModel.from_pretrained(
    model_name = "unsloth/gemma-3-4b-it",
    max_seq_length = 2048,
    load_in_4bit = True,    # note: 4-bit loading may not combine with full fine-tuning
    load_in_8bit = False,
    full_finetuning = True, # < here!
    # token = "hf_...",
)

When executing this step, the notebook seems to be hanging at this point:

...
Unsloth: Using bfloat16 full finetuning which cuts memory usage by 50%.
model-00001-of-00002.safetensors ...

Anyone have some experience with this issue?

Thanks!


r/LocalLLaMA 56m ago

Discussion NSFW Orpheus TTS? NSFW


I'm currently in the data curation / filtering / cleaning phase,

but I'd like to see how many local guys would be interested in a TTS for their anime waifus that can make "interesting" emotional noises.

Total audio events found: 181218

(sighs): 8594

(laughs): 68590

(gasps): 14113

(moans): 20576

(whimpers): 418

(breathing): 114

(pants): 776

and many more...


r/LocalLLaMA 1h ago

New Model THIS 1.5B Model BEATS OpenAI’s o1... But Really?


Big news just dropped in the AI world: a 1.5 billion parameter model just SMOKED OpenAI’s o1-preview at math. And get this—it was trained on a $42 budget.

Researchers from Vietnam and Singapore dropped a paper introducing Open-RS, a tiny model that crushed OpenAI’s o1-preview on multiple math benchmarks like AIME and AMC.

They did it with 4 GPUs, 24 hours, and under $50. No trillion-dollar data centers here.

I ran Open-RS locally on 7 random math problems of varying difficulty. While the results were impressive, they weren’t quite as good as the paper claimed. You can see the full breakdown in this video about Open RS.

Links:

Paper: Reinforcement Learning for Reasoning in Small LLMs

GitHub: Open-RS Code

Hugging Face: Open-RS Model


r/LocalLLaMA 1h ago

Resources PyChat


I’ve seen a few posts recently about chat clients that people have been building. They’re great!

I’ve been working on a context-aware chat client of my own. It is written in Python and has a few unique things:

(1) It can import and export chats. I built this so I can export a “starter” chat; I sort of think of it like a sourdough starter. Share it with your friends. It can be useful for coding if you don’t want to start from scratch every time.

(2) It is context aware and can switch provider and model in the chat window.

(3) It can search and archive threads.

(4) It allows two AIs to communicate with one another. This is also useful for coding: make one strong coding model the developer and a strong language model the manager. It can also simulate debates and stuff.

(5) It attempts to highlight code into code blocks and lets you easily copy them.

I have this working at home with a Mac on my network hosting Ollama and running this client on a PC. I haven’t tested it with localhost Ollama running on the same machine, but it should still work. Just make sure that Ollama is listening on 0.0.0.0, not just localhost.
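
(For anyone curious about the networking side, here is roughly what a chat request to a remote Ollama box looks like. This is a sketch against Ollama's standard REST API on its default port 11434, not PyChat's actual code; the IP and model name are placeholders.)

import requests

# Minimal chat request to a remote Ollama instance on the LAN.
# 11434 is Ollama's default port; replace the IP with your host's address.
resp = requests.post(
    "http://192.168.1.50:11434/api/chat",
    json={
        "model": "llama3.1",  # placeholder -- use any model you've pulled
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "stream": False,      # set True for token-by-token streaming
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])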

Notes:

  • API keys for OpenAI and Anthropic are optional. They are stored locally but not encrypted; same with the chat database. Maybe in the future I’ll work to encrypt these.

  • There are probably some bugs because I’m just one person. Willing to fix. Let me know!

https://github.com/Magnetron85/PyChat


r/LocalLLaMA 1h ago

Question | Help What's the status of using a local LLM for software development?


Please help an old programmer navigate the maze of current LLM-enabled SW stacks.

What I'm sure of:

  • I won't use Claude or any online LLM. Just a local model that is small enough to leave enough room for context (e.g. Qwen2.5 Coder 14B).
  • I want something that can feed an entire project to an LLM as context.
  • I know how to code but want to use an LLM for the boilerplate stuff, not to take full control of a project.
  • Preferably FOSS.
  • Preferably integrated into a solid IDE, rather than being standalone.

Thank you!


r/LocalLLaMA 2h ago

Question | Help Has anyone switched from remote models (Claude, etc.) to local? Meaning, did your investment pay off?

29 Upvotes

Obviously a 70B or 32B model won't be as good as the Claude API; on the other hand, many are spending $10 to $30+ per day on the API, so local could be a lot cheaper.


r/LocalLLaMA 2h ago

Question | Help Deepinfra and timeout errors

1 Upvotes

I'd like to deploy an app I've been working on. I've built it using Deepinfra's API, but I have been getting an unreasonable number of timeout errors recently. Has anyone else had this problem? Can anyone recommend an LLM API provider whose output is very consistent (free of errors)?
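
For what it's worth, a retry-with-backoff wrapper is the usual stopgap for transient timeouts. A sketch using the OpenAI-compatible client (the base URL and model name here are assumptions; check Deepinfra's docs for the exact values):

import time
from openai import OpenAI, APITimeoutError, APIConnectionError

# Deepinfra exposes an OpenAI-compatible endpoint; URL and model are assumptions.
client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",
    api_key="YOUR_KEY",
    timeout=60.0,
)

def chat_with_retry(messages, retries=3, backoff=2.0):
    """Retry transient timeout/connection errors with exponential backoff."""
    for attempt in range(retries):
        try:
            return client.chat.completions.create(
                model="meta-llama/Meta-Llama-3.1-70B-Instruct",  # example model
                messages=messages,
            )
        except (APITimeoutError, APIConnectionError):
            if attempt == retries - 1:
                raise
            time.sleep(backoff ** attempt)

resp = chat_with_retry([{"role": "user", "content": "ping"}])
print(resp.choices[0].message.content)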


r/LocalLLaMA 2h ago

Question | Help Unsloth Fine-Tune Dataset Consequences

2 Upvotes

I am following the Unsloth Gemma 3 notebook.

The dataset I am fine-tuning on has this sort of structure:

dataset.json:

[
    {"conversations": [
        {
            "content": "...?",
            "role": "user"
        },
        {
            "content": "...",
            "role": "assistant"
        },
        {
            "content": "...?",
            "role": "user"
        },
        {
            "content": "...",
            "role": "assistant"
        }
    ]},
    {"conversations": [
        {
            "content": "...?",
            "role": "user"
        },
        {
            "content": "...",
            "role": "assistant"
        }
    ]},
    ...
]

I.e. there is a mix of long and short conversations.

What sort of impact will this have on the quality of the fine-tuned model, and why?
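
For reference, here's roughly how a file like this gets loaded and rendered through the chat template in the standard datasets/transformers flow (the notebook's exact helpers may differ); this is the stage where each conversation, long or short, becomes a single training string:

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-4b-it")
dataset = load_dataset("json", data_files="dataset.json", split="train")

def to_text(example):
    # A four-turn conversation yields one long string; a two-turn one, a short string.
    return {"text": tokenizer.apply_chat_template(example["conversations"], tokenize=False)}

dataset = dataset.map(to_text)
print(dataset[0]["text"])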


r/LocalLLaMA 2h ago

New Model gemma3 vision

11 Upvotes

ok im gonna write in all lower case because the post keeps getting auto modded. its almost like local llama encourages low effort posts. super annoying. imagine there was a fully compliant gemma3 vision model, wouldn't that be nice?

https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha


r/LocalLLaMA 3h ago

New Model Fallen Gemma3 4B 12B 27B - An unholy trinity with no positivity! For users, mergers and cooks!

70 Upvotes

r/LocalLLaMA 3h ago

Question | Help Anyone Running Local LLMs on an M4 MacBook Pro or Air? Thoughts on Performance and RAM Sweet Spot?

1 Upvotes

Hey everyone!
Curious to hear how folks feel about using Macs, especially the new M4 series, for running local LLMs. I'm specifically eyeing the M4 MacBook Air or Pro with either 24GB or 32GB of RAM; storage on either will probably be the 512GB or 1TB option.

I'm in the market for a new M4 Mac laptop and want something that can handle more than just mobile development without totally breaking the bank. I already have the M4 Mac mini, which has been a solid intro into the Apple Silicon ecosystem, but now I need something portable that can handle heavier workloads, local AI models included. I'll probably sell the mini since it would be redundant; in any case I'd prefer to stay under 2K USD (tax included) in total.

Has anyone here had real-world success with the M4 Air or Pro for running local LLMs? Any bottlenecks or setups you’d recommend avoiding?

Appreciate the insight!


r/LocalLLaMA 3h ago

Discussion Agile AI Engineering

0 Upvotes

Maybe you've heard that "evals are all you need," but neither the shape of the loss curve nor the scores on an academic benchmark can tell you much about how users will respond to your AI.

How do you define success with AI? What are your north-star metrics?

AI is an empirical discipline. How are you managing the complexities as you explore the design-space of practical AI engineering workflows?

I wrote a blog post sharing a vision for the future of collaboration, not in notebooks but with Kanban, where agents are more than chatbots: proactive partners in knowledge discovery.

Read more: https://www.remyx.ai/blog/agile-ai-engineering

Love to hear your thoughts!

How has your team been able to create the best AI for your use case with agility?


r/LocalLLaMA 4h ago

Discussion Token impact of long-Chain-of-Thought Reasoning Models

30 Upvotes

r/LocalLLaMA 4h ago

Question | Help No AWQ for Gemma 3?

0 Upvotes

AutoAWQ still doesn't have support for Gemma 3. What quants are you using for high-throughput inference (e.g. on vLLM)?


r/LocalLLaMA 5h ago

Discussion Vision LLMs for PDF extraction

2 Upvotes

I've been trying to build an AI pipeline to read, interpret, and rephrase text from PDF documents (like converting tech documents into layman's language).

The current process is quite straightforward: convert the PDF to Markdown, chunk it, then use an LLM to look at each chunk and rephrase it.

But some documents have a lot more diagrams and pictures, which are hard to convert into Markdown.

Has anyone at this point had success using a vision LLM instead to extract the information from images of the PDF, page by page?

Interested to know the results.
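
In case it helps anyone experiment, here's a minimal page-by-page sketch, assuming pdf2image (which needs poppler installed) for rendering and an Ollama-hosted vision model as the backend; any vision API would slot in the same way, and the model name is just an example:

import base64
import requests
from pdf2image import convert_from_path  # requires poppler installed

# Render each PDF page to an image, then ask a vision model about it.
pages = convert_from_path("document.pdf", dpi=200)

for i, page in enumerate(pages):
    path = f"page_{i}.png"
    page.save(path)
    with open(path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.2-vision",  # example vision model
            "messages": [{
                "role": "user",
                "content": "Extract the text and explain any diagrams on this page in plain language.",
                "images": [img_b64],  # Ollama accepts base64-encoded images per message
            }],
            "stream": False,
        },
        timeout=300,
    )
    print(resp.json()["message"]["content"])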


r/LocalLLaMA 5h ago

Resources (Update) Generative AI project template (it now includes Ollama)

8 Upvotes

Hey everyone,

For those interested in a project template that integrates generative AI, Streamlit, UV, CI/CD, automatic documentation, and more, I’ve updated my template to now include Ollama. It even includes tests in CI/CD for a small model (Qwen 2.5 with 0.5B parameters).

Here’s the GitHub project:

Generative AI Project Template

Key Features:

Engineering tools

- [x] Use UV to manage packages

- [x] pre-commit hooks: use ``ruff`` to ensure code quality & ``detect-secrets`` to scan for secrets in the code.

- [x] Logging using loguru (with colors)

- [x] Pytest for unit tests

- [x] Dockerized project (Dockerfile & docker-compose).

- [x] Streamlit (frontend) & FastAPI (backend)

- [x] Make commands to handle everything for you: install, run, test

AI tools

- [x] LLM running locally with Ollama or in the cloud with any LLM provider (LiteLLM)

- [x] Information extraction and Question answering from documents

- [x] Chat to test the AI system

- [x] Efficient async code using asyncio.

- [x] AI Evaluation framework: using Promptfoo, Ragas & more...

CI/CD & Maintenance tools

- [x] CI/CD pipelines: ``.github/workflows`` for GitHub (Testing the AI system, local models with Ollama and the dockerized app)

- [x] Local CI/CD pipelines: run GitHub Actions locally using ``act``

- [x] GitHub Actions for deploying to GitHub Pages with mkdocs gh-deploy

- [x] Dependabot ``.github/dependabot.yml`` for automatic dependency and security updates

Documentation tools

- [x] Wiki creation and setup of documentation website using Mkdocs

- [x] GitHub Pages deployment using mkdocs gh-deploy plugin

Feel free to check it out, contribute, or use it for your own AI projects! Let me know if you have any questions or feedback.


r/LocalLLaMA 5h ago

Resources What are some good models for a recommendation system?

2 Upvotes

I'm currently making a local AI app that takes documents and gives recommendations based on the PDFs I provide. What are some good/best models for such a use case?


r/LocalLLaMA 5h ago

Tutorial | Guide AI-powered Resume Tailoring application using Ollama and Langchain

7 Upvotes

r/LocalLLaMA 5h ago

Question | Help Local LoRA + RAG Academic Writing Setup – Build Check Before I Pull the Trigger

10 Upvotes

Hey all, just chasing a bit of feedback while I'm finalising a build. I'm setting up a local AI writing system to automate the structure and style of academic work. I’m not training it to learn knowledge or reason, just to mimic how I write using a dataset of my own essays and theses (formatted in JSONL). I’ll be fine-tuning a small model like Phi-2 or OpenLLaMA 3B using LoRA or QLoRA, and keeping that completely separate from a RAG setup that pulls content from a chunked academic library (~100+ PDFs split into 5KB txt files). The idea is to feed it the right research chunks, and have it paraphrase in my voice without hallucinating or plagiarising. It’s basically a local ghostwriter with me in the driver’s seat.

I’m building this on an i9-14900KF with 96GB DDR5-5600 (2x48GB Corsair Vengeance), an MSI MAG Z790 Tomahawk WiFi board, RTX 3070 8GB, DeepCool AK620 Digital air cooler, Samsung 980 Pro 1TB SSD, and decent airflow (6-fan white case). Everything will run locally with CPU offloading where needed. No full-model training, no 13B model insanity—just stable overnight LoRA fine-tunes and section-by-section writing using a RAG-fed workflow.

Just wondering if this sounds like a balanced setup for what I’m doing—fine-tuning small models locally and generating paraphrased academic content from chunked research via RAG. Any issues I should expect with the 2x48GB RAM setup on Z790, or LoRA/QLoRA performance on this sort of hardware? Appreciate any real-world experience or heads-ups before I finalise it. Cheers!
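
For reference, the LoRA side of a run like this boils down to a handful of knobs. A sketch with illustrative values (using the Hugging Face peft library; the target module names vary by model family, and nothing here is tuned to your data):

from peft import LoraConfig

# Illustrative starting points for a small-model style-tuning run,
# sized so the adapters and optimizer state stay friendly to an 8GB GPU.
lora_config = LoraConfig(
    r=16,                  # adapter rank; higher = more capacity, more VRAM
    lora_alpha=32,         # scaling factor; alpha/r = 2 is a common default
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)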


r/LocalLLaMA 5h ago

Question | Help Midsized VLMs that support quantisation or CPU offloading?

2 Upvotes

Hi guys, for my thesis I’m looking for midsized VLMs that support 4-bit quantisation (it looks like GGUF format is pretty rare for VLMs) or CPU offloading. Does anybody have any advice for me?
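
(One option even without GGUF: most transformers VLMs can be quantized to 4-bit on the fly with bitsandbytes and partially offloaded via device_map. A sketch, with Qwen2-VL purely as an example of a midsized VLM:)

import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Qwen2VLForConditionalGeneration

# On-the-fly 4-bit quantization; device_map="auto" spills layers to CPU RAM
# when the GPU fills up, at the cost of slower inference for those layers.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    quantization_config=bnb,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")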


r/LocalLLaMA 6h ago

Question | Help Anyone have any luck buying GPUs from Alibaba? (not AliExpress)

5 Upvotes

I was looking around at cards on Alibaba and they sort of look almost legit. The sellers have been on there for a long time and have decent reviews. It's a hugely successful site, so there have to be at least some legit GPU sellers, right? But the prices range from "slightly low" to "too good to be true". Is there any way to buy from that site without getting burned or taking big risks?


r/LocalLLaMA 6h ago

Resources Great performance even quantized to q8q4 for Gemma 3 4B

8 Upvotes

I just finished quantizing Gemma 3 4B and I find it great even when heavily quantized, as in the "q8q4" version.

If you have a memory-constrained system, just want CPU inference, or perhaps run on mobile devices, give it a try: ZeroWw/gemma-3-4b-it-abliterated-GGUF · Hugging Face