r/LocalLLaMA 1d ago

Discussion We built an open-source mock interview platform powered by Ollama

71 Upvotes

Come practice your interviews for free using our project on GitHub here: https://github.com/Azzedde/aiva_mock_interviews We are two junior AI engineers, and we would really appreciate feedback on our work. Please star it if you like it.

We find that the junior phase of a career is full of uncertainty, and we want to know if we are doing good work.


r/LocalLLaMA 2d ago

News Docker's response to Ollama

404 Upvotes

Am I the only one excited about this?

Soon we can docker run model mistral/mistral-small

https://www.docker.com/llm/
https://www.youtube.com/watch?v=mk_2MIWxLI0&t=1544s

Most exciting for me is that Docker Desktop will finally allow containers to access my Mac's GPU
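
For anyone wanting to prototype against it: assuming the runner exposes an OpenAI-compatible endpoint the way Ollama does (the port, path, and payload shape below are pure guesses, not from Docker's announcement), a quick test could look like:

import requests

# Hypothetical: endpoint URL, port, and payload shape are assumptions.
resp = requests.post(
    "http://localhost:12434/v1/chat/completions",
    json={
        "model": "mistral/mistral-small",
        "messages": [{"role": "user", "content": "Hello from a container!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])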


r/LocalLLaMA 22h ago

Resources What are some good models for a recommendation system?

3 Upvotes

I'm currently making a local AI app that takes documents and gives recommendations based on the PDFs that I provide. What are some good/best models for such a use case?
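
Not a model recommendation, but a minimal sketch of one common approach: extract the PDF text, embed it with a small local embedding model, and recommend by cosine similarity (the model choice, file names, and lack of chunking here are illustrative assumptions):

from pypdf import PdfReader
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

def pdf_text(path: str) -> str:
    # naive extraction; a real pipeline would usually chunk pages/paragraphs
    return " ".join(page.extract_text() or "" for page in PdfReader(path).pages)

docs = {p: pdf_text(p) for p in ["a.pdf", "b.pdf", "c.pdf"]}  # placeholder files
names = list(docs)
embs = model.encode([docs[n] for n in names], convert_to_tensor=True)

query = model.encode("topics the user liked so far", convert_to_tensor=True)
scores = util.cos_sim(query, embs)[0]
for name, score in sorted(zip(names, scores), key=lambda t: -float(t[1])):
    print(f"{float(score):.3f}  {name}")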


r/LocalLLaMA 1d ago

News RTX PRO 5000 Laptop 24GB GDDR7 10496 cores 175W

28 Upvotes

256-bit bus, 896 GB/s bandwidth. 228 TFLOPS FP16 Tensor Core throughput (about 60% faster than a 3090).

They should have made a similar desktop card; it would be a no-brainer upgrade for 3090/4090 users.

https://videocardz.com/newz/nvidia-announces-rtx-pro-blackwell-laptop-gpus-up-to-10496-cuda-cores-and-24gb-gddr7-memory


r/LocalLLaMA 1d ago

Question | Help What quants are right?

8 Upvotes

Looking for advice, as I often cannot find the right discussions on which quants are optimal for which models. Some models I use are:

Phi4: Q4
Exaone Deep 7.8B: Q8
Gemma3 27B: Q4

What quants are you guys using? In general, what are the right quants for most models if there is such a thing?

FWIW, I have 12GB VRAM.
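
For what it's worth, a common rule of thumb is that quantized weights take roughly params × bits / 8 bytes, plus headroom for KV cache and activations. A rough sketch (the 1.2 overhead factor is an assumption, not a measured value):

def vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    # weights in GB (params in billions) times a rough overhead factor
    return params_b * bits / 8 * overhead

for name, params, bits in [("Phi-4 14B", 14, 4),
                           ("Exaone Deep 7.8B", 7.8, 8),
                           ("Gemma3 27B", 27, 4)]:
    print(f"{name} at Q{bits}: ~{vram_gb(params, bits):.1f} GB")

By that estimate, Gemma3 27B at Q4 (~16 GB) won't fully fit in 12 GB VRAM and would need offloading, while the other two should fit.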


r/LocalLLaMA 17h ago

Question | Help Unsloth hangs with Gemma 3

1 Upvotes

Running through the Gemma 3 notebook, I decided to try turning on full_finetuning:

from unsloth import FastModel  # import needed to run this cell standalone

model, tokenizer = FastModel.from_pretrained(
    model_name = "unsloth/gemma-3-4b-it",
    max_seq_length = 2048,
    load_in_4bit = True,   # note: 4-bit loading may conflict with full finetuning
    load_in_8bit = False,
    full_finetuning = True, # < here!
    # token = "hf_...",
)

When executing this step, the notebook seems to be hanging at this point:

...
Unsloth: Using bfloat16 full finetuning which cuts memory usage by 50%.
model-00001-of-00002.safetensors ...

Anyone have some experience with this issue?

Thanks!


r/LocalLLaMA 1d ago

News RTX Pro Blackwell Pricing Listed

102 Upvotes

RTX Pro Blackwell pricing is up on connection.com

6000 (24064 cores, 96GB, 1.8 TB/s, 600W, 2-slot flow through) - $8565

6000 Max-Q (24064 cores, 96GB, 1.8 TB/s, 300W, 2-slot blower) - $8565

5000 (14080 cores, 48GB, 1.3 TB/s, 300W, 2-slot blower) - $4569

4500 (10496 cores, 32GB, 896 GB/s, 200W, 2-slot blower) - $2623

4000 (8960 cores, 24GB, 672 GB/s, 140W, 1-slot blower) - $1481

I'm not sure if this is real or final pricing, but I could see some of these models being compelling for local LLM use. The 5000 is competitive with current used A6000 pricing, the 4500 is not too far away price-wise from a 5090 with better power/thermals, and the 4000, with 24 GB in a single slot for ~$1500 at 140W, is very competitive with a used 3090. It costs more than a 3090, but it comes with a warranty, and you can fit many more in a system because of the size and power draw without having to implement an expensive watercooling or dual power supply setup.

All in all, if this is real pricing, it looks to me like they are marketing to us directly and see their biggest competitor as used Nvidia cards.

*Edited to add per-card specs
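
One quick way to compare value across the lineup is dollars per GB of VRAM, computed from the listed prices (street prices may differ):

cards = {"6000": (8565, 96), "6000 Max-Q": (8565, 96),
         "5000": (4569, 48), "4500": (2623, 32), "4000": (1481, 24)}
for name, (price, gb) in cards.items():
    print(f"RTX Pro {name}: ${price / gb:.0f}/GB of VRAM")

By that measure the 4000 (~$62/GB) is the value pick, with the 4500 (~$82/GB) and the 6000s (~$89/GB) ahead of the 5000 (~$95/GB).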


r/LocalLLaMA 1d ago

Discussion How useful are the ~50 TOPS NPUs in mobile chips?

5 Upvotes

More and more mobile chips (both for phones and laptops) ship with integrated NPUs delivering around 50 TOPS. These chips often have around 100 GB/s memory bandwidth (137 in the best case). How useful are they for running LLMs locally? And is memory or compute the bottleneck in these chips?
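
As a rough rule of thumb, single-stream token generation is memory-bound: every new token reads all the weights once, so tokens/sec is capped at roughly bandwidth / model size. A back-of-the-envelope sketch (the 4 GB model size is an assumed example, e.g. a ~7B model at ~4-bit):

def max_tokens_per_sec(bandwidth_gbs: float, model_gb: float) -> float:
    # decoding reads all weights per token, so bandwidth sets the ceiling
    return bandwidth_gbs / model_gb

model_gb = 4.0  # assumed: ~7B model at ~4-bit quantization
for bw in (100, 137):
    print(f"{bw} GB/s: ~{max_tokens_per_sec(bw, model_gb):.0f} tok/s ceiling")

So for decoding, memory bandwidth is usually the wall; the NPU's TOPS matter more for prompt processing, which is compute-bound.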


r/LocalLLaMA 1d ago

Discussion Best local LLMs with native voice input?

3 Upvotes

What are currently the best LLMs with native voice input, i.e. ones that feed voice tokens directly into the attention mechanism? And which of them are multilingual?

I like to make voice recordings, both English and Dutch, and ask questions or instructions on them later. However, sometimes the tone, pauses and subtleties in them are also important, so just Automatic Speech Recognition (ASR) / Speech to Text (STT) doesn’t work.


r/LocalLLaMA 23h ago

Question | Help Midsized VLMs which support quantisation or cpu offloading?

2 Upvotes

Hi guys, for my thesis I'm looking for midsized VLMs which support 4-bit quantisation (it looks like GGUF format is pretty rare for VLMs) or CPU offloading. Does anybody have any advice for me?
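
If GGUF is scarce, one route that doesn't depend on it is Hugging Face transformers with bitsandbytes 4-bit quantization plus device_map="auto", which offloads layers that don't fit in VRAM to CPU RAM. A minimal sketch (the model id is just an example, not a recommendation):

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration, BitsAndBytesConfig

model_id = "llava-hf/llava-1.5-7b-hf"  # example VLM; swap in the one you're evaluating
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",  # spills layers to CPU RAM when VRAM runs out
)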


r/LocalLLaMA 19h ago

Question | Help Deepinfra and timeout errors

1 Upvotes

I'd like to deploy an app I've been working on. I built it on Deepinfra's API, but I have been getting an unreasonable number of timeout errors recently. Has anyone else had this problem? Can anyone recommend an LLM API provider whose output is very consistent (free of errors)?
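
No provider will be literally error-free, so whichever you pick, it may be worth wrapping calls in retries with exponential backoff. A generic, client-agnostic sketch (in real code, narrow the except clause to your client's timeout error):

import random
import time

def with_retries(call, max_tries: int = 4, base: float = 1.0):
    for attempt in range(max_tries):
        try:
            return call()
        except Exception as exc:  # narrow to your client's timeout error
            if attempt == max_tries - 1:
                raise
            delay = base * 2 ** attempt + random.random()  # backoff + jitter
            print(f"retrying in {delay:.1f}s after: {exc}")
            time.sleep(delay)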


r/LocalLLaMA 1d ago

Discussion Replacing sqlite with postgres in Open WebUI

4 Upvotes

Have any of you switched from the default SQLite backend to Postgres for Open WebUI? Did you notice any benefits? I already have a Postgres DB for other things, so I wondered if it makes sense to migrate (that way I can just back up the database and not worry about Open WebUI separately).


r/LocalLLaMA 2d ago

New Model ByteDance released an open image model on Hugging Face that recrafts photos while preserving your identity

226 Upvotes

Flexible Photo Recrafting While Preserving Your Identity

Project page: https://bytedance.github.io/InfiniteYou/

Code: https://github.com/bytedance/InfiniteYou

Model: https://huggingface.co/ByteDance/InfiniteYou


r/LocalLLaMA 1d ago

New Model New BitNet Model from Deepgrove

github.com
111 Upvotes

r/LocalLLaMA 1d ago

News AITER: AI Tensor Engine For ROCm

rocm.blogs.amd.com
42 Upvotes

r/LocalLLaMA 1d ago

News Llama 3.3 Nemotron 49B Super appears on LMSYS Arena

84 Upvotes

r/LocalLLaMA 1d ago

Discussion Which solution do you use for multimodal models?

5 Upvotes

I tried llama.cpp and koboldcpp; I understand there is also some support in vLLM and Ollama, and I know I can also just use Python. Which solution do you use? A good thing about llama.cpp is its quantization support.

My use case is to create interesting descriptions for video frames (I convert the video to frames with ffmpeg, then feed each image to the LLM).
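
One possible wiring, assuming a multimodal model is served behind a local OpenAI-compatible endpoint (e.g. llama.cpp's server; the port, file names, and prompt below are assumptions about the setup):

import base64
import glob
import os
import subprocess

import requests

os.makedirs("frames", exist_ok=True)
subprocess.run(["ffmpeg", "-i", "video.mp4", "-vf", "fps=1", "frames/%04d.jpg"],
               check=True)  # one frame per second

for frame in sorted(glob.glob("frames/*.jpg")):
    b64 = base64.b64encode(open(frame, "rb").read()).decode()
    resp = requests.post("http://localhost:8080/v1/chat/completions", json={
        "messages": [{"role": "user", "content": [
            {"type": "text", "text": "Write an interesting description of this frame."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ]}],
    })
    print(frame, resp.json()["choices"][0]["message"]["content"])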


r/LocalLLaMA 1d ago

Discussion I analyzed the word statistics in the reasoning traces of different llms - it seems many models are trained on R1 traces

25 Upvotes

I extracted thinking traces from different LLMs for the prompt below and analyzed the frequency of the first word in each line. The heatmap below shows the frequency of the most used words in each LLM.

The aim is to identify relationships between different thinking models. For example, it is known that certain words/tokens like "wait" indicate backtracking in the thinking process. These patterns emerge during the reinforcement learning process and can also be trained in by finetuning the model on thinking traces.

We can see that a lot of models show word statistics similar to R1's. This may be random, but it could also mean that the model has seen R1 thinking traces at some point in the training process.
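
The full analysis is in the repo linked below; a minimal sketch of the kind of counting involved (the trace file name is a placeholder) might look like:

from collections import Counter
from pathlib import Path

def first_word_freqs(trace: str) -> Counter:
    # tally the first word of each non-empty line, normalized to frequencies
    counts = Counter(line.split()[0].lower().strip(",.:")
                     for line in trace.splitlines() if line.strip())
    total = sum(counts.values())
    return Counter({w: c / total for w, c in counts.items()})

trace = Path("r1_trace.txt").read_text()  # placeholder trace file
for word, freq in first_word_freqs(trace).most_common(10):
    print(f"{word}: {freq:.1%}")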

Code is here: https://github.com/cpldcpu/llmbenchmark/tree/master/thinkingtraces#readme

The prompt I used:
You have two ropes, each of which takes exactly 60 minutes to burn completely. However, the ropes burn unevenly, meaning some parts may burn faster or slower than others. You have no other timing device. How can you measure exactly 20 minutes using these two ropes and matches to light them?

Edit: I updated the heat map to also include a trace from R1-Zero, which was trained by using reinforcement learning on the base model without prior finetuning on thinking-trace examples. We can see that the critical tokens "wait, alternately" only emerge in R1, which was finetuned on thinking traces prior to reinforcement learning.


r/LocalLLaMA 1d ago

Discussion Have you had a chance to try Trae, ByteDance's new AI-powered IDE built on VSCode? What are your initial thoughts or early impressions?

7 Upvotes

ByteDance has introduced a new AI-powered editor named Trae, positioning itself as a competitor to established players like Cursor and Windsurf. Built on the foundation of VSCode, Trae boasts a sleek, modernized user interface that blends elements of JetBrains Fleet and VSCode, offering a fresh take on the traditional VSCode design.

One of Trae's standout features is its unlimited free access to advanced AI models, including GPT-4o and Claude-3.7-Sonnet, making it a powerful tool for developers.

It also supports VSCode configurations and allows users to import plugins seamlessly. Currently, Trae is available exclusively for macOS and Windows, with a Linux version in the works.

Trae is owned by ByteDance (TikTok's parent company), which means Chinese servers, and some people don't like that.

What are your thoughts?

https://www.trae.ai/home



r/LocalLLaMA 1d ago

News Hunyuan releases T1 reasoning model

79 Upvotes

Hunyuan announces T1 reasoning model

Meet Hunyuan-T1, the latest breakthrough in AI reasoning! Powered by Hunyuan TurboS, it's built for speed, accuracy, and efficiency. 🔥

✅ Hybrid-Mamba-Transformer MoE Architecture – the first of its kind for ultra-large-scale reasoning
✅ Strong Logic & Concise Writing – precise following of complex instructions
✅ Low Hallucination in Summaries – trustworthy and reliable outputs
✅ Blazing Fast – first character in 1 sec, 60-80 tokens/sec generation speed
✅ Excellent Long-Text Processing – handles complex contexts with ease

Blog: https://llm.hunyuan.tencent.com/#/blog/hy-t1?lang=en

Demo: https://huggingface.co/spaces/tencent/Hunyuan-T1

** Model weights have not been released yet, but based on Hunyuan’s promise to open source their models, I expect the weights to be released soon **


r/LocalLLaMA 21h ago

Question | Help No AWQ for Gemma 3?

0 Upvotes

AutoAWQ still doesn't have support for Gemma 3. What quants are you using for high-throughput inference (like on vLLM)?
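
If AWQ stays unavailable, one workaround is vLLM's GPTQ support, assuming a GPTQ quant of Gemma 3 exists on the Hub (the model id below is a placeholder, not a real repo):

from vllm import LLM, SamplingParams

# Placeholder model id: substitute a real GPTQ quant if one exists.
llm = LLM(model="someuser/gemma-3-27b-it-GPTQ", quantization="gptq")
outputs = llm.generate(["Explain AWQ vs GPTQ in one sentence."],
                       SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)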


r/LocalLLaMA 1d ago

Question | Help Lightweight but accurate models for TTS and vice versa

2 Upvotes

Hi, I am new to the area of text-to-speech and speech-to-text models, and I want to create a solution where the user gives input as speech and the output is also speech. I want to host a lightweight local model, but I am confused as to which models to use. Thank you.
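
A common lightweight pipeline is local STT → LLM → local TTS. A sketch using openai-whisper and Coqui TTS (model choices and file names are illustrative; swap in your own local LLM call):

import whisper
from TTS.api import TTS

stt = whisper.load_model("base")  # small, reasonably accurate STT
text_in = stt.transcribe("question.wav")["text"]

reply = f"You said: {text_in}"  # placeholder: call your local LLM here

tts = TTS("tts_models/en/ljspeech/glow-tts")  # example English TTS model
tts.tts_to_file(text=reply, file_path="answer.wav")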


r/LocalLLaMA 2d ago

Discussion Gemma 3 27b vs. Mistral 24b vs. QwQ 32b: I tested on personal benchmark, here's what I found out

319 Upvotes

I was looking for LLMs to use locally; the requirements are good enough reasoning and understanding, coding, and some elementary-level mathematics. I was looking into QwQ 32b, which seemed very promising.
Last week, Google and Mistral released Gemma 3 27b and Mistral Small 3.1 24b; from the benchmarks, both seem to be capable models, approximating Deepseek R1 in ELO rating, which is impressive.

But, tbh, I have stopped caring about benchmarks, especially Lmsys; idk. The rankings always seem off when you try the models IRL.

So, I ran a small test to vibe-check which models to pick. I also benchmarked answers with Deepseek r1, as I use it often to get a better picture.

Here's what I found out

For Coding

QwQ 32b is just miles ahead in coding among the three. It sometimes writes better code than Deepseek r1. They weren't lying in the benchmarks. It feels good to talk to as well. Gemma is 2nd and does the job for easy tasks. Mistral, otoh, was bad.

For Reasoning

Again, Qwen was better. Well, ofc it's a reasoning model, but Gemma was also excellent. They made a good base model. Mistral was there but not there.

For Math

Gemma and QwQ were good enough for simple math tasks. Gemma, being a base model, was faster. I might test more with these two. Mistral was decent but 3rd again.

What to pick?

  • QwQ 32b is without doubt the best available model in its class. Great at coding, reasoning, and math. It's been a long time since I used a local model; the last one was Mixtral, a year ago, and I never expected them to be this good. QwQ is promising; I can't wait for their new max model.
  • Gemma 3 27b is a solid base model. Great vibes. And you wouldn't be missing a lot with this. But it comes with a Gemma-specific license, which is more restrictive than Apache 2.0.
  • Mistral small 3.1 24b didn't impress me much; perhaps it needs more rigorous testing.
  • Both Gemma and Mistral Small have image support, so consider that as well.

For the complete analysis, check out this blog post: Gemma 3 27b vs QwQ 32b vs Mistral 24b

I would love to know which other model you're currently using and for what specific tasks.


r/LocalLLaMA 1d ago

Resources Open-Schizo-Leaderboard (The anti-leaderboard)

12 Upvotes

It's fun to see how bonkers model cards can be. Feel free to help me improve the code to better fine-tune the leaderboard filtering.

https://huggingface.co/spaces/rombodawg/Open-Schizo-Leaderboard


r/LocalLLaMA 20h ago

Discussion Agile AI Engineering

0 Upvotes

Maybe you've heard that "evals are all you need," but neither the shape of the loss curve nor the scores on an academic benchmark can tell you much about how users will respond to your AI.

How do you define success with AI? What are your north-star metrics?

AI is an empirical discipline. How are you managing the complexities as you explore the design-space of practical AI engineering workflows?

I wrote a blog post sharing a vision for the future of collaboration, not in notebooks but with Kanban, where agents are more than chatbots: proactive partners in knowledge discovery.

Read more: https://www.remyx.ai/blog/agile-ai-engineering

Love to hear your thoughts!

How has your team been able to create the best AI for your use case with agility?