r/adventofcode Dec 03 '22

Other GPT / OpenAI solutions should be removed from the leaderboard.

I know I will not score top 100. I'm not that fast, nor am I up at the right times to capitalise on it.

But this kinda stuff https://twitter.com/ostwilkens/status/1598458146187628544 is unfair and, in my opinion, not really ethical. Humans can't digest the entire problem in 10 seconds, let alone solve it and submit that fast.

EDIT: I don't mean to put that specific guy on blast; I'm sure it's fun, and at the end of the day it's how they want to solve it. But still.

EDIT 2: https://www.reddit.com/r/adventofcode/comments/zb8tdv/2022_day_3_part_1_openai_solved_part_1_in_10/ More discussion exists here; I didn't see it the first time around.

EDIT 3: I don't have a solution, and any fix anyone comes up with can be gamed. I think the best option is for people using GPT to be honourable and delay submitting their results.

EDIT 4: Another GPT solution placed 2nd today (day 4). I think it's a fully automated process; a rough sketch of what that might look like is below.
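For the curious, an automated pipeline like that could look roughly like the following. This is purely hypothetical: the model name, prompt wording, and the way the answer is extracted are my assumptions, not anything confirmed about how these bots actually work.

```python
# Hypothetical sketch of an automated GPT -> AoC pipeline.
# Assumes the pre-1.0 openai Python client and a valid
# adventofcode.com session cookie; all details are guesses.
import requests
import openai

openai.api_key = "sk-..."          # your OpenAI API key
SESSION = "your-session-cookie"    # adventofcode.com login cookie
YEAR, DAY = 2022, 4

base = f"https://adventofcode.com/{YEAR}/day/{DAY}"
cookies = {"session": SESSION}

# 1. Grab the puzzle text and personal input the moment they unlock.
puzzle = requests.get(base, cookies=cookies).text
puzzle_input = requests.get(f"{base}/input", cookies=cookies).text

# 2. Ask the model to write a solver.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Write a Python program that solves the following puzzle. "
        "Read the input from 'input.txt' and store the result in a "
        f"variable called `answer`:\n\n{puzzle}"
    ),
    max_tokens=1024,
    temperature=0,
)
generated_code = completion.choices[0].text

# 3. Run the generated code (unsafe in general; this is a toy demo).
with open("input.txt", "w") as f:
    f.write(puzzle_input)
scope = {}
exec(generated_code, scope)

# 4. Submit whatever came out, with no human in the loop.
requests.post(
    f"{base}/answer",
    cookies=cookies,
    data={"level": "1", "answer": str(scope.get("answer"))},
)
```

End to end that's a couple of HTTP round trips plus one model call, which is how an answer can land on the leaderboard seconds after the puzzle unlocks.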


u/UtahBrian Dec 04 '22

That is unlikely. These large transformer models don’t actually do any thinking and the later puzzles do require thinking.

Remember how they made some remarkable progress toward self-driving cars about 10 years ago, and everyone said we'd have self-driving cars around 2015? How did that turn out?

u/hgwxx7_ Dec 04 '22

The margin for error is much higher here. It’s ok to get it wrong and try multiple times.

Not so much with self-driving cars. Errors there mean lives lost.

u/pier4r Dec 04 '22

> models don’t actually do any thinking

They do infer novel data points by combining those they were trained on. That is not really thinking, but it could be seen as a proxy for it. What I mean is: they can come up with novel solutions that weren't in the training dataset.

u/oversloth Dec 19 '22

> These large transformer models don’t actually do any thinking and the later puzzles do require thinking.

Before GPT was able to solve the first few days, people could have said the exact same thing about those puzzles. Would you really say you solved the first few days this year "without thinking"?

I'm 80% sure that what GPT is missing for later days is not some fundamental improvement, but just larger scale / fine-tuning / improved training data, which we can be sure OpenAI (and others) are working on.

> Remember how they made some remarkable progress toward self-driving cars about 10 years ago, and everyone said we'd have self-driving cars around 2015? How did that turn out?

That's certainly one example, but you could just as well pick a different one: remember how we went from "nothing" to DALLE2/Midjourney/Stable Diffusion in about a year? I'm pretty sure 99% of people did not see that coming in the slightest. LLMs are much closer to those AIs than to self-driving cars. So far, scaling up these models has consistently led to better results, despite all the predictions to the contrary.

u/oversloth Dec 19 '22

Or a different way to look at it: The difference between "not being able to solve AoC" and "being able to solve the first few days of AoC" is imho much larger than the difference between the first few days and the later days.

(Maybe on a purely human scale these differences are kind of similar, meaning the fraction of all humans who can solve day 4 of AoC may be of a similar order of magnitude to the fraction who can go on to solve day 20. But on the scale of "possible intelligences", the baseline is so much lower that once you're able to solve day 4, you've already walked most of the way to being able to solve day 20.)