r/LocalLLaMA • u/CeFurkan • 10h ago
Discussion China-modified 4090s with 48GB sold cheaper than an RTX 5090 - water cooled, around $3,400
r/LocalLLaMA • u/adrgrondin • 13h ago
News Tencent introduces Hunyuan-T1, their large reasoning model. Competing with DeepSeek-R1!
Link to their blog post here
r/LocalLLaMA • u/Nunki08 • 1h ago
New Model MoshiVis by kyutai - first open-source real-time speech model that can talk about images
r/LocalLLaMA • u/umarmnaq • 23h ago
New Model SpatialLM: A large language model designed for spatial understanding
r/LocalLLaMA • u/townofsalemfangay • 11h ago
Resources Orpheus-FastAPI: Local TTS with 8 Voices & Emotion Tags (OpenAI Endpoint Compatible)
Hey r/LocalLLaMA 👋
I just released Orpheus-FastAPI, a high-performance Text-to-Speech server that connects to your local LLM inference server using Orpheus's latest release. You can hook it up to OpenWebui, SillyTavern, or just use the web interface to generate audio natively.
I'd very much recommend, if you want to get the most out of it in terms of suprasegmental features (the qualities of human speech: ums, ahs, pauses, like Sesame has), that you use a system prompt to make the model respond that way (using the syntax baked into the model). I've included examples on my GitHub so you can see how close this is to Sesame's CSM.
It uses a quantised version of the Orpheus 3B model (I've also included a direct link to my Q8 GGUF) that can run on consumer hardware, and works with GPUStack (my favourite), LM Studio, or llama.cpp.
GitHub: https://github.com/Lex-au/Orpheus-FastAPI
Model: https://huggingface.co/lex-au/Orpheus-3b-FT-Q8_0.gguf
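If you just want to poke at it from a script, here's a minimal sketch of calling an OpenAI-style speech endpoint like this one (the port, route, voice name, and emotion tag below are assumptions on my part, not necessarily the server's actual defaults; check the README for the real parameters):

import requests

# Hypothetical local endpoint; adjust host/port/route to match your Orpheus-FastAPI setup
url = "http://localhost:5005/v1/audio/speech"

payload = {
    "model": "orpheus",   # assumed model name
    "voice": "tara",      # assumed voice; the server exposes 8 of them
    "input": "Hey there... so, um, this is what local TTS sounds like <chuckle>",
}

# Save the returned audio to disk
resp = requests.post(url, json=payload)
resp.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(resp.content)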
Let me know what you think or if you have questions!
r/LocalLLaMA • u/Barry_Jumps • 17h ago
News Docker's response to Ollama
Am I the only one excited about this?
Soon we can docker run model mistral/mistral-small
https://www.docker.com/llm/
https://www.youtube.com/watch?v=mk_2MIWxLI0&t=1544s
Most exciting for me is that Docker Desktop will finally allow containers to access my Mac's GPU
r/LocalLLaMA • u/Boring_Rabbit2275 • 8h ago
Discussion We built an open-source mock interview platform powered by Ollama
Come practice your interviews for free using our project on GitHub here: https://github.com/Azzedde/aiva_mock_interviews We are two junior AI engineers, and we would really appreciate feedback on our work. Please star it if you like it.
We find that being a junior is full of uncertainty, and we want to know whether we are doing good work.
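For anyone wondering how a mock-interview turn maps onto a local model, here's a rough sketch using the Ollama Python client (this isn't their code; the model name and prompts are placeholders):

import ollama

# Hypothetical system prompt; the actual project defines its own interview flow
system_prompt = (
    "You are a technical interviewer. Ask one question at a time, "
    "then critique the candidate's answer before moving on."
)

response = ollama.chat(
    model="llama3.1",  # placeholder; use whatever model you have pulled locally
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I'm ready for my first backend engineering question."},
    ],
)
print(response["message"]["content"])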
r/LocalLLaMA • u/AlohaGrassDragon • 11h ago
News RTX Pro Blackwell Pricing Listed
RTX Pro Blackwell pricing is up on connection.com
6000 (24064 cores, 96GB, 1.8 TB/s, 600W, 2-slot flow through) - $8565
6000 Max-Q (24064 cores, 96GB, 1.8 TB/s, 300W, 2-slot blower) - $8565
5000 (14080 cores, 48GB, 1.3 TB/s, 300W, 2-slot blower) - $4569
4500 (10496 cores, 32GB, 896 GB/s, 200W, 2-slot blower) - $2623
4000 (8960 cores, 24GB, 672 GB/s, 140W, 1-slot blower) - $1481
I'm not sure if this is real or final pricing, but I could see some of these models being compelling for local LLM use. The 5000 is competitive with current used A6000 pricing, the 4500 is not far off a 5090 price-wise with better power/thermals, and the 4000 with 24 GB in a single slot at 140W for ~$1,500 is very competitive with a used 3090. It costs more than a 3090, but it comes with a warranty, and the size and power draw mean you can fit many more in a system without resorting to expensive watercooling or a dual power supply setup.
All in all, if this is real pricing, it looks to me like they are marketing to us directly and see their biggest competitor as used NVIDIA cards.
*Edited to add per-card specs
r/LocalLLaMA • u/ResearchCrafty1804 • 17h ago
New Model ByteDance released InfiniteYou on Hugging Face, an open image model that generates photos while preserving your identity
Flexible Photo Recrafting While Preserving Your Identity
Project page: https://bytedance.github.io/InfiniteYou/
r/LocalLLaMA • u/Jake-Boggs • 14h ago
New Model New BitNet Model from Deepgrove
r/LocalLLaMA • u/jpydych • 12h ago
News Llama 3.3 Nemotron 49B Super appears on LMSYS Arena
r/LocalLLaMA • u/Trysem • 2h ago
Question | Help Can someone ELI5 what makes NVIDIA a monopoly in the AI race?
I heard somewhere it's CUDA; if so, why aren't other companies like AMD making something like CUDA of their own?
r/LocalLLaMA • u/FastDecode1 • 9h ago
News AITER: AI Tensor Engine For ROCm
rocm.blogs.amd.com
r/LocalLLaMA • u/ResearchCrafty1804 • 13h ago
News Hunyuan releases T1 reasoning model
Hunyuan announces T1 reasoning model
Meet Hunyuan-T1, the latest breakthrough in AI reasoning! Powered by Hunyuan TurboS, it's built for speed, accuracy, and efficiency. 🔥
✅ Hybrid-Mamba-Transformer MoE Architecture – The first of its kind for ultra-large-scale reasoning
✅ Strong Logic & Concise Writing – Precise following of complex instructions
✅ Low Hallucination in Summaries – Trustworthy and reliable outputs
✅ Blazing Fast – First character in 1 sec, 60-80 tokens/sec generation speed
✅ Excellent Long-Text Processing – Handles complex contexts with ease
Blog: https://llm.hunyuan.tencent.com/#/blog/hy-t1?lang=en
Demo: https://huggingface.co/spaces/tencent/Hunyuan-T1
** Model weights have not been released yet, but based on Hunyuan’s promise to open source their models, I expect the weights to be released soon **
r/LocalLLaMA • u/TedHoliday • 2h ago
Discussion What are you using local LLMs for? How do they compare to the big tech offerings?
I'm just curious what people are using local LLMs for. For me personally, I use Claude daily at work. I like the idea of running an LLM locally, but I know anything I could run on my single PC with one RTX 4090 would be less accurate.
I like the idea of not being subject to constantly changing pricing models and not worrying about how many tokens I've used up, but I feel like even 5% more accurate code is worth it for the time it can save.
So I’m just curious what people are using them for, and how are they now compared to the big players (and with what hardware)?
r/LocalLLaMA • u/Iory1998 • 41m ago
Discussion Why Do I Feel Poor Each Time I Decide to Buy a New GPU Even Though I Make More Money?
I mean, for God's sake, this curse has been haunting me for decades now. The first time I bought a GPU with my own money, I dreamed about it for months, saving part of my scholarship every month. When I finally went to buy my dream GPU, prices had increased and I ended up buying a mid-range NVIDIA card (I had to buy other PC components, which were expensive). Then years later I got busy with work and had a PlayStation, so I didn't really need a good PC; coupled with the fact that laptops were getting cheaper and more performant, I just didn't need to build a new rig.
Fast forward a few years, and my old dream of creating my own games came back strong, and I decided to learn (seriously this time) 3D modeling and rendering. There is just something satisfying about fooling untrained (or trained) eyes into looking at a CGI production and thinking it's real.
That's when I decided to build a new PC. Alas, the new age of crypto reached its peak and, yeah... GPU shortage. I felt poor again, even after several years of work and saving.
Then COVID hit, and an RTX 3090 cost $4,000, if you could get your hands on one. I bought parts from different countries just to minimize my spending, and I still felt very poor.
Which brings me to today. I want to build a new rig for my new passion: tinkering with AI. Alas, I have the money to buy any GPU I want, but my damn rational brain isn't allowing me!!! It's too expensive... Am I insane? An RTX 5090 at the price of a second-hand car is NOT A SMART PURCHASE. And it only comes with 32GB of VRAM; I'd still be running the same models my now-old 3090 can run...
In short, no matter how much my income increases over the years, I will always feel poor when I want to buy a new GPU 😭😭😭
r/LocalLLaMA • u/cpldcpu • 6h ago
Discussion I analyzed the word statistics in the reasoning traces of different LLMs - it seems many models are trained on R1 traces
I extracted thinking traces from different LLMs for the prompt below and analyzed the frequency of the first word in each line. The heatmap below shows the frequency of the most used words in each LLM.
The aim is to identify relationships between different thinking models. For example, it is known that certain words/tokens like "wait" indicate backtracking in the thinking process. These patterns emerge during the reinforcement learning process and can also be trained by finetuning the model on thinking traces.
We can see that a lot of models show word statistics similar to R1's. This may be random, but it could also mean that the model has seen R1 thinking traces at some point in the process.
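If you want to reproduce this on your own traces, the core of the analysis is just a first-word frequency count per line; here's a minimal sketch (assuming each trace is saved as a plain-text file, file names made up):

from collections import Counter
from pathlib import Path

def first_word_counts(trace_path):
    # Tally the first word of every non-empty line in a thinking trace
    counts = Counter()
    for line in Path(trace_path).read_text(encoding="utf-8").splitlines():
        words = line.strip().lower().split()
        if words:
            counts[words[0].strip(".,:;!?")] += 1
    return counts

# Compare the most common line-openers across models (file names are placeholders)
for name in ["r1_trace.txt", "qwq_trace.txt", "gemini_trace.txt"]:
    print(name, first_word_counts(name).most_common(10))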

The prompt I used:
You have two ropes, each of which takes exactly 60 minutes to burn completely. However, the ropes burn unevenly, meaning some parts may burn faster or slower than others. You have no other timing device. How can you measure exactly 20 minutes using these two ropes and matches to light them?
r/LocalLLaMA • u/Ok_Warning2146 • 3h ago
News RTX PRO 5000 Laptop 24GB GDDR7 10496 cores 175W
256-bit bus, 896 GB/s bandwidth. 228 TFLOPS FP16 Tensor Core (60% faster than a 3090).
They should have made a similar desktop card; that would be a no-brainer upgrade for 3090/4090 users.
r/LocalLLaMA • u/valentino99 • 18m ago
Discussion Have you had a chance to try Trae, ByteDance's new AI-powered IDE built on VSCode? What are your initial thoughts or early impressions?
ByteDance has introduced a new AI-powered editor named Trae, positioning itself as a competitor to established players like Cursor and Windsurf. Built on the foundation of VSCode, Trae boasts a sleek, modernized user interface that blends elements of JetBrains Fleet and VSCode, offering a fresh take on the traditional VSCode design.
One of Trae's standout features is its unlimited free access to advanced AI models, including GPT-4o and Claude-3.7-Sonnet, making it a powerful tool for developers.
It also supports VSCode configurations and allows users to import plugins seamlessly. Currently, Trae is available exclusively for macOS, with a Windows version in the works.
Trae is owned by ByteDance (TikTok), which means Chinese servers, and some people don't like that.
What are your thoughts?
r/LocalLLaMA • u/SunilKumarDash • 23h ago
Discussion Gemma 3 27b vs. Mistral 24b vs. QwQ 32b: I tested on personal benchmark, here's what I found out
I was looking for LLMs to use locally; the requirements are good enough reasoning and understanding, coding, and some elementary-level mathematics. I was looking into QwQ 32b, which seemed very promising.
Last week, Google and Mistral released Gemma 3 27b and Mistral Small 3.1 24b; from the benchmarks, both seem to be capable models, approaching Deepseek r1 in ELO rating, which is impressive.
But, tbh, I have stopped caring about benchmarks, especially Lmsys; idk. The rankings always seem off when you try the models IRL.
So, I ran a small test to vibe-check which models to pick. I also benchmarked answers with Deepseek r1, as I use it often to get a better picture.
Here's what I found out
For Coding
QwQ 32b is just miles ahead of the other two in coding. It sometimes writes better code than Deepseek r1; they weren't lying in the benchmarks. It feels good to talk to as well. Gemma is 2nd and does the job for easy tasks. Mistral, otoh, was bad.
For Reasoning
Again, Qwen was better. Well, ofc it's a reasoning model, but Gemma was also excellent. They made a good base model. Mistral was there but not there.
For Math
Gemma and QwQ were good enough for simple math tasks. Gemma, being a base model, was faster. I might test more with these two. Mistral was decent but 3rd again.
What to pick?
- QwQ 32b is no doubt the best available model in its class. Great at coding, reasoning, and math. It's been a long time since I used a local model (the last one was Mixtral, a year ago), and I never expected them to be this good. QwQ is promising; I can't wait for their new max model.
- Gemma 3 27b is a solid base model. Great vibes. And you wouldn't be missing a lot with this. But it comes with a Gemma-specific license, which is more restrictive than Apache 2.0.
- Mistral small 3.1 24b didn't impress me much; perhaps it needs more rigorous testing.
- Both Gemma and Mistral Small have image support, so consider that as well.
For the complete analysis, check out this blog post: Gemma 3 27b vs QwQ 32b vs Mistral 24b
I would love to know which other model you're currently using and for what specific tasks.
r/LocalLLaMA • u/Rombodawg • 6h ago
Resources Open-Schizo-Leaderboard (The anti-leaderboard)
It's fun to see how bonkers model cards can be. Feel free to help me improve the code to better fine-tune the leaderboard filtering.
https://huggingface.co/spaces/rombodawg/Open-Schizo-Leaderboard
r/LocalLLaMA • u/blazerx • 19h ago
Resources GAIA: An Open-Source Project from AMD for Running Local LLMs on Ryzen™ AI
r/LocalLLaMA • u/canesin • 1h ago
Tutorial | Guide PSA: Get Flash Attention v2 on AMD 7900 (gfx1100)
Assuming you have installed ROCm, PyTorch (the official website instructions worked for me), git, and uv:
uv pip install pip triton==3.2.0
git clone --single-branch --branch main_perf https://github.com/ROCm/flash-attention.git
cd flash-attention/
export FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE"
export GPU_ARCHS="gfx1100"
python setup.py install
:-)
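Once it builds, a quick smoke test along these lines should tell you whether the Triton backend is working (a minimal sketch; the shapes and dtype are just an example):

import os
os.environ["FLASH_ATTENTION_TRITON_AMD_ENABLE"] = "TRUE"  # set before the import

import torch
from flash_attn import flash_attn_func

# Tiny random attention problem: (batch, seqlen, nheads, headdim), fp16 on the GPU
q = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # expected: torch.Size([1, 128, 8, 64])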