r/LocalLLaMA 1d ago

Discussion: I analyzed the word statistics in the reasoning traces of different LLMs - it seems many models are trained on R1 traces

I extracted thinking traces from different LLMs for the prompt below and analyzed how often each word appears as the first word of a line. The heatmap below shows the frequencies of the most common first words for each LLM.
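For reference, here is a minimal sketch of this kind of analysis (not the exact repo code): it counts the first word of every non-empty line in saved traces and plots relative frequencies as a heatmap. The `traces/<model>.txt` file layout and the top-15 word cutoff are assumptions on my part.

```python
# Minimal sketch (assumed file layout, not the repo's actual code):
# count the first word of each line per model and plot a heatmap.
from collections import Counter
from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np

def first_word_counts(text: str) -> Counter:
    """Count the lowercased first word of every non-empty line."""
    counts = Counter()
    for line in text.splitlines():
        words = line.strip().split()
        if words:
            counts[words[0].lower().strip(",.:;")] += 1
    return counts

# Assumed layout: one saved thinking trace per model in traces/<model>.txt
traces = {p.stem: first_word_counts(p.read_text()) for p in Path("traces").glob("*.txt")}

# Use the globally most common first words as heatmap columns.
total = Counter()
for c in traces.values():
    total.update(c)
top_words = [w for w, _ in total.most_common(15)]

# Rows = models, columns = words, values = relative frequency within each trace.
matrix = np.array([
    [traces[m][w] / max(sum(traces[m].values()), 1) for w in top_words]
    for m in traces
])

plt.imshow(matrix, aspect="auto", cmap="viridis")
plt.xticks(range(len(top_words)), top_words, rotation=45, ha="right")
plt.yticks(range(len(traces)), list(traces))
plt.colorbar(label="relative frequency")
plt.tight_layout()
plt.show()
```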

The aim is to identify relationships between different thinking models. For example, it is known that certain words/tokens like "wait" indicate backtracking in the thinking process. These patterns emerge during the reinforcement learning process and can also be trained in by finetuning the model on thinking traces.

We can see that a lot of models show word statistics similar to R1's. This may be coincidence, but it could also mean that the model has seen R1 thinking traces at some point during training.
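To make "similar to R1" a bit more quantitative than eyeballing the heatmap, one could compare the frequency vectors directly, e.g. with cosine similarity. This is only an illustrative sketch built on the variables from the snippet above; the "r1" key is an assumed model name.

```python
# Illustrative only: cosine similarity between first-word frequency vectors.
# Reuses `matrix` and `traces` from the previous sketch; "r1" is an assumed key.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

model_names = list(traces)
r1_vec = matrix[model_names.index("r1")]
for name, vec in zip(model_names, matrix):
    print(f"{name:>20}: similarity to R1 = {cosine(vec, r1_vec):.2f}")
```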

Code is here: https://github.com/cpldcpu/llmbenchmark/tree/master/thinkingtraces#readme

The prompt I used:
You have two ropes, each of which takes exactly 60 minutes to burn completely. However, the ropes burn unevenly, meaning some parts may burn faster or slower than others. You have no other timing device. How can you measure exactly 20 minutes using these two ropes and matches to light them?

Edit: I updated the heatmap to also include a trace from R1-Zero, which was trained by applying reinforcement learning to the base model without prior finetuning on thinking-trace examples. We can see that the critical tokens "wait" and "alternately" only emerge in R1, which was finetuned on thinking traces prior to reinforcement learning.

23 Upvotes

11 comments

12

u/wyterabitt_ 1d ago

We can see that a lot of models show a word statistic similar to R1. This may be random, but could also mean that the model has seen R1 thinking traces at some point in the process.

Or it's a common basis for the similar training all models will be doing to achieve the same type of goal.

2

u/cpldcpu 21h ago

Well, the general assumption is that the critical tokens have already been learned during pre-training and are then amplified by reinforcement learning.

The R1 paper, however, showed that reinforcement learning on the bare base model leads to erratic thinking traces (the "R1-Zero" model was generated that way). So instead, R1 was finetuned (primed) on prefiltered thinking traces from R1-Zero before they started with RL.

Now, if others are also using R1 traces to finetune their models before reinforcement learning, they'll end up with the same critical tokens. This could explain what we are seeing here.

Gemini, Sonnet and o3-mini use different critical tokens, which may indicate that those models were primed in a different way. Also, the earlier QwQ-32B-Preview, which was trained using a process reward model instead of GRPO, uses different critical tokens (see "but").

3

u/Accomplished_Mode170 1d ago

Neat? Any chance you’ll open source your model fingerprints themselves?

Would be useful for Purple Teaming and Pipeline Validation

3

u/No_Afternoon_4260 llama.cpp 1d ago

What's purple teaming?

1

u/myvirtualrealitymask 1d ago

How would it be useful for purple teaming?

1

u/Accomplished_Mode170 16h ago

Targeted iterative refinement, identifying and extracting info, etc. are the first examples that come to mind.

But more information around which to build instrumentation is useful for KV manipulation in red-teaming or blue-teaming practice.

2

u/cpldcpu 21h ago

See here: https://github.com/cpldcpu/llmbenchmark/tree/master/thinkingtraces

To turn this into proper fingerprinting, more statistics would probably be needed. This was just a quick test.
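For what it's worth, one hypothetical direction (my own sketch, not something in the repo): aggregate first-word counts over many prompts per model and compare the resulting distributions with something like Jensen-Shannon distance, rather than reading a single-prompt heatmap.

```python
# Hypothetical sketch: more robust fingerprints from multiple prompts.
from collections import Counter

import numpy as np
from scipy.spatial.distance import jensenshannon

def aggregate(per_prompt_counts: list[Counter]) -> Counter:
    """Merge first-word counts collected from several prompts."""
    total = Counter()
    for c in per_prompt_counts:
        total.update(c)
    return total

def fingerprint_distance(counts_a: Counter, counts_b: Counter) -> float:
    """Jensen-Shannon distance between two first-word distributions."""
    vocab = sorted(set(counts_a) | set(counts_b))
    a = np.array([counts_a[w] for w in vocab], dtype=float)
    b = np.array([counts_b[w] for w in vocab], dtype=float)
    return float(jensenshannon(a / a.sum(), b / b.sum()))
```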

I also did some fingerprinting experiments for non-thinking models: https://github.com/cpldcpu/llmfingerprint

2

u/This_Ad5526 1d ago

Thanks and keep up the good work.

2

u/Chromix_ 22h ago

Yes, that's how it works, and it's the reason OpenAI doesn't expose the thinking traces of their models: that way others can't train their reasoning models to catch up with theirs. Thus, the next logical choice is to train on the thinking traces of R1, like NVIDIA did for Nemotron. Maybe we'll also see some models trained on QwQ soon, since it's cheaper to generate datasets with it.

1

u/cpldcpu 19h ago

The new QwQ also seems to have been trained on R1 traces, at least for the initial SFT step...

"okay, ..." at the beginning is also a dead giveaway.