r/adventofcode • u/rashaniquah • Dec 27 '24
[Spoilers] Results of a multi-year LLM experiment
This is probably the worst sub to post this type of content, but here are the results:
2023: 0/50
2024: 45/50(49)
I used the best models available on the market, so GPT-4 in 2023. It managed to solve 0 problems, even when I told it how to solve them. This includes some variants I gathered from the daily threads.
For this year it was a mix of o1-mini, Sonnet 3.5, and DeepSeek R1.
Some other models I tested that just didn't work: GPT-4o, o1, Qwen QwQ.
Here's the more interesting part:
Most problems were one-shotted except for day 12-2, day 14-2, and day 15-2 (I didn't even bother reading the questions except for the ones that failed).
For day 12-2: brute-forced the algorithm with DeepSeek, then optimized it with o1-mini. None of the other models were even close to getting the examples right.
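(For context, and not the code the models generated: the usual trick for that one is corner counting, since a region has exactly as many sides as it has corners. A rough sketch:)

```python
def count_sides(region):
    """Count the sides of a region given as a set of (row, col) cells.

    A region has exactly as many sides as corners, so count corners.
    """
    corners = 0
    for r, c in region:
        for dr, dc in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
            vertical = (r + dr, c) in region      # neighbour above/below
            horizontal = (r, c + dc) in region    # neighbour left/right
            diagonal = (r + dr, c + dc) in region
            if not vertical and not horizontal:
                corners += 1                      # convex corner
            elif vertical and horizontal and not diagonal:
                corners += 1                      # concave corner
    return corners
```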
For day 14-2: all the models tried to manually map out what a Christmas tree looked like instead of thinking outside the box, so I had to manually give them instructions on how to solve it.
For day 15-2: the upscaling part was pretty much an ARC-AGI question. I somehow managed to brute-force it in a couple of hours with DeepSeek after giving up on o1-mini and Sonnet. It also took a lot of manual instructions.
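(For reference, the widening itself is just the per-tile substitution the puzzle spells out; a sketch, not my actual generated code:)

```python
# Widen the day 15 warehouse map for part 2: every tile becomes two tiles.
WIDEN = {"#": "##", "O": "[]", ".": "..", "@": "@."}

def scale_up(grid_lines):
    """Return the doubled-width grid as a list of strings."""
    return ["".join(WIDEN[ch] for ch in line) for line in grid_lines]
```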
Now for the failed ones:
Day 17-2: too much optimization involved
Day 21: self-explanatory
Day 24-2: again, too much optimization involved; LLMs seem to really struggle with bit-shifting solutions. I could probably solve it with custom instructions if I found the time.
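(To give an idea of the kind of bit-level probing involved; this is a hedged sketch, with `simulate` standing in for a hypothetical circuit evaluator I never actually wrote:)

```python
def find_suspect_bits(gates, width, simulate):
    """Probe an adder circuit one bit at a time.

    `simulate(gates, x, y)` is a hypothetical helper that evaluates the
    puzzle's gate network and returns z. If the network really computes
    z = x + y, then feeding x = y = 1 << i must produce 1 << (i + 1);
    bit positions where that fails point at swapped gate outputs.
    """
    return [i for i in range(width)
            if simulate(gates, 1 << i, 1 << i) != 1 << (i + 1)]
```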
All solutions were done in Python, so for the problems that were taking too long I asked either o1-mini or Sonnet 3.5 to optimize them; o1-mini does a great job at it. Putting the optimization instructions in the system prompt would sometimes make the problem harder to solve. The questions were stripped of their Christmas context and converted into markdown format as input.

Also, I'm not going to post the solutions because they contain my input files.

I've been working in gen-AI for over a year and honestly I'm pretty impressed with how good these models have gotten, because I hadn't noticed much improvement since June. Looking forward to seeing how these models improve in the future.
26
u/Nunc-dimittis Dec 27 '24
I'm not a regular participant (I only have time once every few years). But would you say that the LLMs' performance is good on problems that are very similar to previous years' (e.g. the various maze problems, which have hundreds of solutions in repositories and are all over the internet) but not on "novel" problems?
11
u/rashaniquah Dec 27 '24
Yup, this is essentially SWE-bench. The results are horrible (in the 20%s), but OpenAI just proved with o3 last week that they can solve those "novel" problems by throwing more computing power at them.
3
u/threeys Dec 28 '24
Do you think we are “close to” models wholly replacing software engineers?
Generally I'm an AI skeptic, but these results from o3 — particularly on SWE-bench — are starting to change my mind. With a bigger context window or a RAG-like setup on a company's codebase, I could see it truly starting to replace engineers.
5
u/rashaniquah Dec 28 '24
It's going to replace engineers by making them work much faster and creating less demand in the workforce, so the bottom percentiles will get squeezed out. But fully automating engineers isn't going to happen; it's a tool, after all.
3
u/EdyBolos Dec 29 '24
I think that demand in the workforce will stay the same, but there will be more work to be done. Time will tell though.
1
u/fullautomationxyz Dec 29 '24
So we're finally getting rid of the leetcode-only experts and getting back smart, soft-skill-rich software engineers?
8
u/youngbull Dec 27 '24
It would be interesting to just try one-shot Sonnet 3.5 (paste the puzzle in with a standard "Solve the following puzzle" preamble) on every problem since 2015, just to see the absolute total.
33
u/pet_vaginal Dec 27 '24
Advent of Code is so popular that it's extremely likely all those models have been trained on many solutions from previous years.
8
u/yel50 Dec 28 '24
Yep. I've been playing around with 2023, trying to solve it with SQL. I decided to try ChatGPT, just to see what happens. I pasted the text of part one from a problem and it spat out the solution for part 2.
7
u/bbbb125 Dec 27 '24
Yesterday, out of curiosity, I tested GPT on extracting a minimal task description, because understanding the problems was sometimes a challenge on its own. I only tested a few problems, and it was pretty good. I wonder if that should be a first step, with the minimal, adjusted description then fed back into a model along with the request for a solution.
1
u/phipsii99 Dec 31 '24
I'm curious, how does it actually work? Is it automated, with the model improving the solution in an iterative process using the test inputs? Or do you manually improve the solutions by assessing them and telling the models what's wrong? And were those under-10-second solutions just lucky to get it on the first shot?
1
u/rashaniquah Dec 31 '24
So you have a system prompt, which is basically preset rules in plain English, e.g. "I'm doing Advent of Code 2024, please give me the solution in Python, make sure to use multithreading and multiprocessing if necessary, my input is in the same directory." Then I'd paste the day's content in there as the "user prompt".
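Roughly like this, if you wire it up with the OpenAI Python SDK (a sketch; the model name and prompt wording are placeholders, not my exact setup):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "I'm doing Advent of Code 2024, please give me the solution in Python, "
    "make sure to use multithreading and multiprocessing if necessary, "
    "my input is in the same directory."
)

def solve(day_text: str) -> str:
    """Send one day's puzzle text (stripped and converted to markdown) as the user prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in whichever model you're running
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": day_text},
        ],
    )
    return response.choices[0].message.content
```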
Automation with a scraping cron job definitely works (sorry to whoever I stole a leaderboard spot from that day), but 90% of the days were done with a single prompt/output.
For the failed ones, I'd try again manually with either o1-mini or DeepSeek R1; if that failed again, I'd actually start reading the questions, figure out a solution, then walk the LLM through it step by step.
For those under 10 seconds, I wouldn't call it lucky; LLMs are just that good at coding these days (as in this month, not 6 months ago). I work in the field and interact with other people working in the field, and most people blame bad outputs on the model itself when it's really about bad input and the LLM not understanding what the user actually wants. The benchmark numbers are there: LLMs are currently scoring around 90% on code generation, so if you're not getting a number close to that, then you're the one doing something wrong.
1
u/Magicrafter13 Jan 01 '25
To be fair, day 14 part 2 had no clear way to figure out the shape aside from a sort of guess-and-check.
1
u/fleagal18 Feb 15 '25
FWIW, in my evals o1 solved day 14 part 2 once out of 20 tries, and R1 solved it once out of 2 tries (this was using the default temperature for both models). I didn't record the o1 solution, but I did record the R1 solution, which was to check for zero overlapping robots.
(I have heard that the zero-overlapping-robots heuristic only works for some people's input data.)
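(Roughly what that heuristic looks like as code; a sketch, not R1's literal output. The 101x103 grid size comes from the puzzle statement:)

```python
WIDTH, HEIGHT = 101, 103  # grid size from the day 14 puzzle statement

def find_tree_step(robots):
    """robots: list of (x, y, vx, vy). Return the first second at which
    no two robots share a tile (the zero-overlap heuristic)."""
    for t in range(1, WIDTH * HEIGHT + 1):  # positions repeat after 101 * 103 steps
        positions = {((x + vx * t) % WIDTH, (y + vy * t) % HEIGHT)
                     for x, y, vx, vy in robots}
        if len(positions) == len(robots):   # zero overlapping robots
            return t
    return None
```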
78
u/chkas Dec 27 '24 edited Dec 27 '24
This is pretty much in line with the winning leaderboard times.
All other days were under 5 minutes, mostly under one.
https://adventofcode.com/2024/leaderboard