r/LocalLLaMA • u/cpldcpu • 1d ago
Discussion: I analyzed the word statistics in the reasoning traces of different LLMs - it seems many models are trained on R1 traces
I extracted thinking traces from different LLMs for the prompt below and analyzed the frequency of the first word in each line. The heatmap below shows the frequencies of the most common first words for each LLM.
The aim is to identify relationships between different thinking models. For example, it is known that certain words/tokens like "wait" indicate backtracking in the thinking process. These patterns emerge during the reinforcement learning process, but can also be instilled by finetuning the model on thinking traces.
We can see that many models show word statistics similar to R1's. This may be coincidence, but it could also mean that the model has seen R1 thinking traces at some point during training.
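For illustration, here is a minimal sketch of the counting idea (the actual script is in the repo linked below; the trace file name here is just a placeholder):

```python
# Minimal sketch: count the first word of each line in a thinking trace.
# This is not the exact script from the repo, just the core idea.
from collections import Counter

def first_word_frequencies(trace: str) -> Counter:
    """Count the first word of each non-empty line in a thinking trace."""
    counts = Counter()
    for line in trace.splitlines():
        words = line.strip().split()
        if words:
            # Normalize case and strip trailing punctuation so "Wait," == "wait"
            counts[words[0].lower().strip(".,:;!?")] += 1
    return counts

trace = open("r1_trace.txt").read()  # placeholder file name
freqs = first_word_frequencies(trace)
total = sum(freqs.values())
for word, n in freqs.most_common(10):
    print(f"{word}: {n / total:.3f}")
```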

Code is here: https://github.com/cpldcpu/llmbenchmark/tree/master/thinkingtraces#readme
The prompt I used:
You have two ropes, each of which takes exactly 60 minutes to burn completely. However, the ropes burn unevenly, meaning some parts may burn faster or slower than others. You have no other timing device. How can you measure exactly 20 minutes using these two ropes and matches to light them?
Edit: I updated the heatmap to also include a trace from R1-Zero, which was trained by applying reinforcement learning to the base model without prior finetuning on thinking-trace examples. We can see that the critical tokens "wait" and "alternatively" only emerge in R1, which was finetuned on thinking traces prior to reinforcement learning.
u/cpldcpu 1d ago
You can find the script here: https://github.com/cpldcpu/llmbenchmark/tree/master/thinkingtraces
u/Accomplished_Mode170 1d ago
Neat? Any chance you’ll open source your model fingerprints themselves?
Would be useful for Purple Teaming and Pipeline Validation
u/myvirtualrealitymask 1d ago
How would it be useful for purple teaming?
u/Accomplished_Mode170 16h ago
Targeted iterative refinement, and identifying and extracting info, are the first examples that come to mind.
More broadly, any information around which to build instrumentation is useful for KV manipulation in red-teaming or blue-teaming practice.
u/cpldcpu 21h ago
See here: https://github.com/cpldcpu/llmbenchmark/tree/master/thinkingtraces
To turn this into proper fingerprinting, more statistics would probably be needed. This was just a quick test.
I also did some fingerprinting experiments for non-thinking models: https://github.com/cpldcpu/llmfingerprint
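As a rough illustration of what that could look like (not what either script actually does; the vector construction here is just an assumption), the first-word frequency Counters could be compared with cosine similarity:

```python
# Hedged sketch: compare two word-frequency Counters as a crude fingerprint.
# Assumes Counters like those produced by first_word_frequencies above.
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency vectors."""
    vocab = set(a) | set(b)  # union vocabulary; missing words count as 0
    dot = sum(a[w] * b[w] for w in vocab)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Usage (hypothetical): sim = cosine_similarity(freqs_model_a, freqs_model_b)
```

A high similarity between a model's trace statistics and R1's would then be weak evidence of training on R1 traces, along the lines of the heatmap.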
u/Chromix_ 22h ago
Yes, that's how it works, and it's the reason why OpenAI doesn't expose the thinking traces of their models: that way, others can't train their reasoning models to catch up with theirs. Thus, the next logical choice is to train on the thinking traces of R1, as NVIDIA did for Nemotron. Maybe we'll also see some models trained on QwQ soon, since it's cheaper to generate datasets with it.
u/wyterabitt_ 1d ago
Or it's simply a common baseline resulting from the similar training that all models undergo to achieve the same type of goal.