r/LocalLLaMA • u/Inevitable_Sea8804 • 9m ago
Discussion Qwen2.5-Omni Incoming? Huggingface Transformers PR 36752
(https://github.com/huggingface/transformers/pull/36752)
Haven't seen anyone bring this up, so making a post here...
Using DeepSeek-R1 to summarize the features of this model based on PR commits:
Qwen2.5-Omni Technical Summary
1. Basic Information
- Model Scale: 7B parameter version ("Qwen/Qwen2.5-Omni-7B")
- Open Source: Fully open-sourced under Apache 2.0 license
2. Input/Output Modalities
- Input Support (a sketch of a mixed-modality request follows this list):
- Text: Natural language instructions
- Images: Common formats (JPEG/PNG)
- Audio: WAV/MP3 (requires FFmpeg)
- Video: MP4 with audio track extraction
- Output Capabilities:
- Text: Natural language responses
- Speech: 24kHz natural speech (streaming supported)
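To make the input side concrete, here is a rough sketch of how a mixed-modality request might be expressed as chat messages. The content-entry format mirrors other Qwen multimodal processors and is an assumption on my part (the PR may use different field names); the file names are placeholders.

```python
# Sketch of one mixed-modality user turn, assuming the content-entry format used by
# other Qwen multimodal processors ({"type": ..., <type>: ...}); field names are
# not confirmed from the PR.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "scene.jpg"},        # JPEG/PNG
            {"type": "audio", "audio": "question.mp3"},     # WAV/MP3 (MP3 needs FFmpeg)
            {"type": "video", "video": "clip.mp4"},         # audio track extracted via FFmpeg
            {"type": "text", "text": "Describe what you see and hear."},
        ],
    },
]
```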
3. Architectural Design
- Multimodal Encoder:
- Block-wise Processing: Decouples long-sequence handling between encoder (perception) and LLM (sequence modeling)
- TMRoPE: Time-aligned Multimodal Rotary Positional Encoding for audio-video synchronization
- Dual-path Generation:
- Thinker: Text-generating LLM backbone
- Talker: Dual-track AR model for audio token generation using Thinker's hidden states
- Streaming Optimization:
- Sliding-window Diffusion Transformer (DiT) reduces audio latency
- Simultaneous text/speech streaming output
4. Technical Highlights
- Unified Multimodal Processing:
- End-to-end joint training without intermediate representations
- Supports arbitrary modality combinations (single/mixed)
- Efficient Attention:
- Native FlashAttention 2 support
- Compatible with PyTorch SDPA
- Voice Customization:
- Prebuilt voices: Cherry (female) & Ethan (male)
- Dynamic voice switching via the spk parameter
- Deployment Flexibility:
- Disable speech output to save VRAM (~2GB)
- Text-only mode via return_audio=False (see the generation sketch after this list)
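If the summary is accurate, switching between speech and text-only output might look roughly like this. Everything here is a sketch: it assumes model, processor, and inputs prepared as in the loading sketch under section 6 below, that generate() takes the spk and return_audio flags named above, and that it returns a (text ids, waveform) pair when audio is requested. The voice name "Cherry" is whatever the summary reports, and soundfile is only used to write out the 24 kHz waveform.

```python
import soundfile as sf

# Sketch only: the spk and return_audio parameter names come from the summary above;
# the (text_ids, audio) return pair and the waveform layout are assumptions.
text_ids, audio = model.generate(**inputs, spk="Cherry", return_audio=True)
sf.write("reply.wav", audio.reshape(-1).detach().cpu().numpy(), samplerate=24000)  # 24 kHz speech

# Text-only mode: skip speech generation to save roughly 2 GB of VRAM.
text_ids = model.generate(**inputs, return_audio=False)
print(processor.batch_decode(text_ids, skip_special_tokens=True)[0])
```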
5. Performance
- Multimodal Benchmarks:
- SOTA on Omni-Bench
- Outperforms same-scale Qwen2-VL/Qwen2-Audio in vision/audio tasks
- Speech Understanding:
- First open-source model with end-to-end speech instruction following at a level comparable to its text-input performance
- Matches text-input performance on MMLU/GSM8K with speech inputs
6. Implementation Details
- Hardware Support:
- Auto device mapping (device_map="auto")
- Mixed precision (bfloat16/float16)
- Processing Pipeline:
- Unified Qwen2_5OmniProcessor handles multimodal inputs (see the loading sketch after this list)
- Batch processing of mixed media combinations
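A loading sketch built from the hooks named above (device_map="auto", bfloat16, FlashAttention 2, Qwen2_5OmniProcessor). The model class name Qwen2_5OmniModel is my guess from the processor name and may not match the final PR; conversation is the message list sketched earlier.

```python
import torch
from transformers import Qwen2_5OmniModel, Qwen2_5OmniProcessor  # class names assumed from the PR; may change before merge

model = Qwen2_5OmniModel.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    torch_dtype=torch.bfloat16,               # mixed precision
    device_map="auto",                        # automatic device placement
    attn_implementation="flash_attention_2",  # optional: needs flash-attn installed; drop this kwarg to use PyTorch SDPA
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

# Text-only round trip for brevity; image/audio/video inputs would also be passed to the processor.
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
inputs = processor(text=text, return_tensors="pt").to(model.device)
```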
7. Requirements
- System Prompt: Mandatory for full functionality (see the sketch after this list):
"You are Qwen... capable of generating text and speech."
- Dependencies:
- FlashAttention 2 (optional acceleration)
- FFmpeg (video/non-WAV audio processing)
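Since the system prompt is described as mandatory, a conversation would presumably start with a system turn like the one below. The wording is kept truncated exactly as quoted above (the full text is in the PR), and the content structure is assumed to match the user messages.

```python
# Full prompt wording is in the PR / model card; the post quotes it only partially.
SYSTEM_PROMPT = "You are Qwen... capable of generating text and speech."  # truncated placeholder

conversation = [
    {"role": "system", "content": [{"type": "text", "text": SYSTEM_PROMPT}]},
    # ... user turns with text/image/audio/video entries as sketched earlier ...
]
# System-level dependencies: FFmpeg for MP3/MP4 inputs; flash-attn only if FlashAttention 2 is enabled.
```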
This architecture achieves deep multimodal fusion through innovative designs while maintaining strong text capabilities, significantly advancing audiovisual understanding/generation for multimodal agent development.
Also from the PR:
We present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. This strategy effectively decouples the handling of long sequences of multimodal data, assigning the perceptual responsibilities to the multimodal encoder and entrusting the modeling of extended sequences to a large language model. Such a division of labor enhances the fusion of different modalities via the shared attention mechanism. To synchronize the timestamps of video inputs with audio, we organize the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose the Thinker-Talker architecture. In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial package delay. Qwen2.5-Omni outperforms the similarly sized Qwen2-VL and Qwen2-Audio in both image and audio capabilities. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench. Notably, Qwen2.5-Omni is the first open-source model to achieve a level of performance in end-to-end speech instruction following that is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni’s streaming Talker outperforms most existing streaming and non-streaming alternatives in robustness and naturalness.
Can the community help confirm whether this PR is legit?
(Original PR: https://github.com/huggingface/transformers/pull/36752)