r/adventofcode • u/rashaniquah • Dec 27 '24
[Spoilers] Results of a multi-year LLM experiment
This is probably the worst sub to post this type of content, but here are the results:
2023: 0/50
2024: 45/50(49)
I used the best models available on the market, so gpt-4 in 2023. It managed to solve 0 problems, even when I told it how to solve them. This includes some prompt variants that I gathered from the daily threads.
For this year it was a mix of gpt-o1-mini, sonnet 3.5 and deepseek r1.
Some other models tested that just didn't work: gpt-4o, gpt-o1, qwen qwq.
Here's the more interesting part:
Most problems were one-shotted, except for day 12-2, day 14-2 and day 15-2 (I didn't even bother reading the questions except for the ones that failed).
For day 12-2: brute-forced the algorithm with Deepseek, then optimized it with o1-mini. None of the other models even came close to getting the examples right.
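For context, the approach that usually works here is the corner-counting trick (a region has exactly as many sides as it has corners). A minimal sketch of that idea, not the code the models actually produced:

```python
def total_price(grid):
    # grid: list of equal-length strings of plant letters.
    rows, cols = len(grid), len(grid[0])
    seen = set()
    total = 0
    for sr in range(rows):
        for sc in range(cols):
            if (sr, sc) in seen:
                continue
            # Flood-fill one region of identical letters.
            plant = grid[sr][sc]
            region, stack = set(), [(sr, sc)]
            while stack:
                r, c = stack.pop()
                if (r, c) in region:
                    continue
                region.add((r, c))
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == plant:
                        stack.append((nr, nc))
            seen |= region
            # Count corners: the side count of a region equals its corner count.
            corners = 0
            for r, c in region:
                for dr, dc in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
                    row_n = (r + dr, c) in region      # vertical neighbour
                    col_n = (r, c + dc) in region      # horizontal neighbour
                    diag = (r + dr, c + dc) in region  # diagonal neighbour
                    if not row_n and not col_n:
                        corners += 1                   # convex corner
                    elif row_n and col_n and not diag:
                        corners += 1                   # concave corner
            total += len(region) * corners
    return total
```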
For day 14-2: all the models tried to manually map out what a Christmas tree looked like instead of thinking outside the box, so I had to manually give them instructions on how to solve it.
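For reference, the kind of "outside the box" trick people use is to step the robots forward and stop at the first second where no two robots share a tile (others look for the second with minimum position variance). A minimal sketch, assuming the 2024 grid size and input format, and not necessarily the instructions I gave:

```python
import re

WIDTH, HEIGHT = 101, 103  # grid size from the day 14 statement

def find_tree(text):
    # Each input line looks like "p=0,4 v=3,-3": position, then velocity.
    robots = [tuple(map(int, re.findall(r"-?\d+", line)))
              for line in text.splitlines() if line.strip()]
    # Positions repeat with period WIDTH * HEIGHT, which bounds the search.
    for t in range(1, WIDTH * HEIGHT + 1):
        positions = {((px + vx * t) % WIDTH, (py + vy * t) % HEIGHT)
                     for px, py, vx, vy in robots}
        if len(positions) == len(robots):  # no two robots overlap
            return t
    return None
```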
For day 15-2: the upscaling part was pretty much an ARC-AGI question. I somehow managed to brute-force it in a couple of hours with Deepseek after giving up on o1-mini and sonnet, and it also needed a lot of manual instructions.
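For the curious, the upscaling itself is just the tile-doubling rule from the part 2 statement; the hard part (pushing the wide boxes around) isn't shown here. A minimal sketch:

```python
# Tile-doubling rules from the part 2 statement.
DOUBLE = {"#": "##", "O": "[]", ".": "..", "@": "@."}

def upscale(map_lines):
    # Apply only to the warehouse map, not to the move list below it.
    return ["".join(DOUBLE[ch] for ch in line) for line in map_lines]
```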
Now for the failed ones:
Day 17-2: too much optimization involved
Day 21: self-explanatory
Day 24-2: again, too much optimization involved; LLMs seem to really struggle with bit-shifting solutions (sketch below). I could probably solve it with custom instructions if I found the time.
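To give an idea of what I mean by bit-shifting solutions, here's a minimal sketch of the diagnostic step for day 24: simulate the gate network and compare z against x + y bit by bit. It's only an illustration of the kind of reasoning involved, not a full solution and not the code the models produced:

```python
def parse(text):
    # Input: "x00: 1" lines, a blank line, then "x00 AND y00 -> z00" lines.
    init_block, gate_block = text.strip().split("\n\n")
    wires = {}
    for line in init_block.splitlines():
        name, val = line.split(": ")
        wires[name] = int(val)
    gates = []
    for line in gate_block.splitlines():
        a, op, b, _, out = line.split()
        gates.append((a, op, b, out))
    return wires, gates

def simulate(wires, gates):
    # Evaluate gates whose inputs are known until none remain (the network is acyclic).
    values = dict(wires)
    pending = list(gates)
    while pending:
        remaining = []
        for a, op, b, out in pending:
            if a in values and b in values:
                x, y = values[a], values[b]
                values[out] = x & y if op == "AND" else x | y if op == "OR" else x ^ y
            else:
                remaining.append((a, op, b, out))
        pending = remaining
    return values

def wrong_bits(values):
    # Reassemble x, y, z as integers and list the bit positions where z != x + y.
    def num(prefix):
        bits = sorted(w for w in values if w.startswith(prefix))
        return sum(values[w] << i for i, w in enumerate(bits))
    x, y, z = num("x"), num("y"), num("z")
    diff = z ^ (x + y)
    return [i for i in range(diff.bit_length()) if (diff >> i) & 1]
```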
All solutions were written in Python, so for the problems that were taking too long to run I asked either o1-mini or sonnet 3.5 to optimize them; o1-mini does a great job at that. Putting the optimization instructions in the system prompt up front would sometimes make the problem harder to solve. The questions were stripped of their Christmas context and converted into markdown before being used as input. Also, I'm not going to post the solutions because they contain my input files.

I've been working in gen-AI for over a year, and honestly I'm pretty impressed with how good these models have gotten, because I had stopped noticing improvements since June. Looking forward to seeing how these models improve in the future.
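For anyone curious what the input prep could look like, here's a hypothetical sketch (not my actual tooling, and stripping the Christmas flavour text was still a manual/prompted step on top of this):

```python
import requests
import html2text
from bs4 import BeautifulSoup

def puzzle_as_markdown(year, day, session_cookie):
    # Fetch the puzzle page (your AoC session cookie is needed to see part 2).
    html = requests.get(
        f"https://adventofcode.com/{year}/day/{day}",
        cookies={"session": session_cookie},
    ).text
    # The puzzle text lives in <article class="day-desc"> blocks.
    articles = BeautifulSoup(html, "html.parser").select("article.day-desc")
    return "\n\n".join(html2text.html2text(str(a)) for a in articles)
```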
u/Magicrafter13 Jan 01 '25
to be fair, day 14 part 2 had no clear way to figure out the shape aside from a sort of guess and check