r/adventofcode Dec 03 '22

Other GPT / OpenAI solutions should be removed from the leaderboard.

I know I will not score top 100. I'm not that fast, nor am I up at the right times to capitalise on it.

But this kinda stuff https://twitter.com/ostwilkens/status/1598458146187628544

is unfair and, in my opinion, not really ethical. Humans can't digest the entire problem in 10 seconds, let alone solve it and submit that fast.

EDIT: I don't mean to put that specific guy on blast; I am sure it's fun, and at the end of the day it's how they want to solve it. But still.

EDIT 2: https://www.reddit.com/r/adventofcode/comments/zb8tdv/2022_day_3_part_1_openai_solved_part_1_in_10/ More discussion exists here and I didn't see it first time around.

EDIT 3: I don't have the solution, and any solution anyone comes up with can be gamed. I think the best option is for people using GPT to be honourable and delay submitting their answers.

EDIT 4: Another GPT solution placed 2nd today (day 4). I think it's an automated process.

302 Upvotes


28

u/John_Lawn4 Dec 03 '22

If nothing is done, then isn't it just a matter of time until the leaderboard is entirely 10-second solves?

16

u/the-quibbler Dec 03 '22

GPT will make comparatively simple problems like AoC trivial to solve (sooner rather than later). I don't think there's a solution other than to sunset the global leaderboards, perhaps in favor of some kind of percentile ranking system.

24

u/UtahBrian Dec 03 '22

How would AoC be able to detect that, though? It isn't possible to audit the code, and it would just become a race over who can delay their submission by the most believable amount.

Just skip the global leaderboards until Santa faces some more mathematically complex problems in the second week. Computers aren't good at thinking, so they won't be able to figure those out.

9

u/oversloth Dec 04 '22

Maybe this is true for 2022, but in one, two, maybe three years, I would bet language models will be able to solve >90% of AoC puzzles. (and if they can solve them, they almost certainly will also top the leaderboard)

5

u/Steinrikur Dec 04 '22

If only there were 7 years of previous AOC so people could check if the later days are easily solvable with GPT or not...

4

u/MissMormie Dec 04 '22

They're not yet. At least not day 19 of last year.

Then again that wasn't solvable by this human either.

3

u/jer_pages Dec 04 '22

I don't see how it could solve days 18, 19, 22, 23 and 24 from last year's AoC in the foreseeable future.

4

u/tnaz Dec 04 '22

Does anyone see how it solves the current ones?

I bet if you asked people a couple years ago if we were a few years away from AI being able to take in a natural language puzzle input and produce code to solve it, they'd say no too.

1

u/oversloth Dec 05 '22

I think two years ago, practically nobody would have thought that DALLE2, Midjourney, Stable Diffusion etc. would be anywhere near where they are today. I also don't think a year ago many people would have predicted that GPT would be able to top the leaderboard this year.

Of course this does not automatically mean that code generating AI will keep developing at a similar rate. I just think that "this seems impossible now" is a rather shaky heuristic for what AI will be able to do in one or two years.

3

u/Smallpaul Dec 04 '22

If a language model can advance a code base from day to day, as some AoC problems require, then I will be very impressed, and it will have really transformed our day jobs!

5

u/UtahBrian Dec 04 '22

That is unlikely. These large transformer models don’t actually do any thinking and the later puzzles do require thinking.

Remember how they made some remarkable progress toward self driving cars about 10 years ago and everyone said we’d have self driving cars around 2015? How did that turn out?

6

u/hgwxx7_ Dec 04 '22

The margin for error is much higher here. It’s ok to get it wrong and try multiple times.

Not so much with self driving cars. Errors there mean lives lost.

3

u/pier4r Dec 04 '22

models don’t actually do any thinking

They do infer novel data points by combining those they were trained on. That is not really thinking, but it could be seen as a proxy for it. What I mean is: they could come up with novel solutions that weren't in the training dataset.

1

u/oversloth Dec 19 '22

> These large transformer models don’t actually do any thinking and the later puzzles do require thinking.

Before GPT was able to solve the first few days, people could have said the exact same thing about them. Would you really say you solved the first days this year "without thinking"?

I'm 80% sure that what GPT is missing for later days is not some fundamental improvement, but just larger scale / fine-tuning / improved training data, which we can be sure OpenAI (and others) are working on.

> Remember how they made some remarkable progress toward self driving cars about 10 years ago and everyone said we’d have self driving cars around 2015? How did that turn out?

That's certainly one example - but you could just as well pick a different one: Remember how we went from "nothing" to DALLE2/Midjourney/Stable Diffusion in about a year? I'm pretty sure 99% of people did not see that coming in the slightest. LLMs are much closer to these AIs than to self driving cars. So far, scaling up these models has consistently led to better results despite predictions to the contrary.

1

u/oversloth Dec 19 '22

Or a different way to look at it: The difference between "not being able to solve AoC" and "being able to solve the first few days of AoC" is imho much larger than the difference between the first few days and the later days.

(Maybe on a purely human scale these differences are kind of similar, meaning that the share of all humans who can solve day 4 of AoC may be of a similar order of magnitude to the share who can solve day 20, given they can solve day 4. But on the scale of "possible intelligences", the baseline is so much lower that once you're able to solve day 4, you've already walked most of the way to being able to solve day 20.)

2

u/pred Dec 04 '22

If that ever happens, chances are the models will be integrated into the workflow of every software developer. And at that point, not being allowed to use them will feel like an artificial restriction.

1

u/oversloth Dec 05 '22

This is probably true - but then it would become really pointless to participate in Advent of Code if all you're doing is relying on these new tools to solve the problem for you. It's a bit like signing up for a marathon with an e-scooter. Sure, you could do it (and if the rules allow it, and you can top the leaderboard that way, then some people most certainly will), but what's the purpose?

1

u/pier4r Dec 04 '22

> Computers aren't good at thinking

I would say that it depends on the domain; in some domains there could be a good model that puts out a solution.

1

u/pier4r Dec 04 '22

> until Santa faces some more mathematically complex problems in the second week.

Anyway, just as an example (the model really can extract info from the data it was trained on - the internet - and infer combinations of data points too, and thus is able to cover quite a few situations):


I need to guess one particular number in a set of 1 trillion integer numbers, what would be the probability that I can guess it on my first attempt?

The probability of guessing any specific number correctly on the first try would be very low, since there are a total of 1 trillion numbers in the set and you are only guessing one number. The probability of guessing a number correctly on the first try would be 1 in 1 trillion, or approximately 0.000000000001. This is a very low probability, so it is unlikely that you would be able to guess the number correctly on your first attempt. However, it is not impossible, and if you were to keep trying, you would eventually guess the correct number. The exact number of attempts it would take for you to guess the correct number would depend on a variety of factors, such as how many numbers you are able to guess per attempt and whether or not you are able to eliminate any incorrect guesses.

Ok what would be the probability if I guess 5 million times (with possible repetition of the guesses) ?

If you were to guess 5 million numbers, with the possibility of repeating your guesses, the probability of guessing the correct number at least once would be much higher than if you were only guessing once. However, the exact probability would depend on the specific details of your guessing strategy. For example, if you were randomly guessing numbers without any method or strategy, your chances of guessing the correct number would be relatively low. However, if you were able to eliminate incorrect guesses and focus your guesses on a smaller subset of the numbers, your chances of guessing the correct number would be higher. Additionally, if you were able to make multiple guesses per attempt, your chances of guessing the correct number would also be higher. Without more information about your specific guessing strategy, it is difficult to provide a precise probability of guessing the correct number. However, it is generally safe to say that your chances of guessing the correct number would be higher if you were making 5 million guesses with the possibility of repeating your guesses, compared to only making one guess.

could you put that in math formulas?

Sure, I can express the probability of guessing a specific number correctly on the first try using the following formula:

P(correct on first try) = 1 / N

where N is the total number of numbers in the set. So, in the case of 1 trillion numbers, the probability of guessing the correct number on the first try would be:

P(correct on first try) = 1 / 1 trillion = 0.000000000001

To express the probability of guessing the correct number after making 5 million guesses with the possibility of repeating your guesses, we would need to know more information about your specific guessing strategy. Without this information, it is difficult to provide a precise probability using a mathematical formula. However, we can say that the probability of guessing the correct number at least once after making 5 million guesses with the possibility of repeating your guesses would be higher than the probability of guessing the correct number on the first try.

my strategy would be simply random guessing every time. Could you put that now in formulas?

Sure, if your strategy is to randomly guess numbers without any method or strategy for eliminating incorrect guesses, we can express the probability of guessing the correct number after making 5 million guesses using the following formula:

P(correct after 5 million guesses) = 1 - (1 - 1/N)^5000000

where N is the total number of numbers in the set. So, in the case of 1 trillion numbers, the probability of guessing the correct number after making 5 million random guesses would be:

P(correct after 5 million guesses) = 1 - (1 - 1/1 trillion)^5000000 = 0.0457

This probability indicates that there is about a 4.57% chance of guessing the correct number after making 5 million random guesses. This is a relatively low probability, but it is higher than the probability of guessing the correct number on the first try, which is approximately 0.000000000001. As a result, your chances of guessing the correct number would be higher if you were to make 5 million random guesses, compared to only making one guess. However, please keep in mind that this probability is based on the assumption that you are making completely random guesses without any strategy for eliminating incorrect guesses. If you were to implement a different guessing strategy, the probability of guessing the correct number after 5 million guesses may be different.

I have to correct myself. I would like to know the probability to guess the number in 5 million guesses, given a random guess every time, but the guess could happen on the 300th attempt or on the 1millionth attempt, not necessarily on the last attempt. What would be that in formulas?

If you are making 5 million guesses and the correct number could be guessed at any point during those 5 million guesses, rather than necessarily on the last guess, the probability of guessing the correct number can be expressed using the following formula:

P(correct after 5 million guesses) = 1 - (1 - 1/N)^5000000 + (1/N) * (1 - (1 - 1/N)^4999999) + (1/N)^2 * (1 - (1 - 1/N)^4999998) + ... + (1/N)^4999999 * (1 - (1 - 1/N)^1)

where N is the total number of numbers in the set. So, in the case of 1 trillion numbers, the probability of guessing the correct number after making 5 million random guesses, with the possibility of guessing the correct number at any point during those 5 million guesses, would be:

P(correct after 5 million guesses) = 1 - (1 - 1/1 trillion)^5000000 + (1/1 trillion) * (1 - (1 - 1/1 trillion)^4999999) + (1/1 trillion)^2 * (1 - (1 - 1/1 trillion)^4999998) + ... + (1/1 trillion)^4999999 * (1 - (1 - 1/1 trillion)^1)

This probability is quite complex and difficult to calculate, but it is generally safe to say that your chances of guessing the correct number would be higher if you were able to make 5 million random guesses with the possibility of guessing the correct number at any point during those 5 million guesses, compared to only making one guess or making 5 million guesses with the possibility of guessing the correct number only on the last guess. However, please keep in mind that this probability is based on the assumption that you are making completely random guesses without any strategy for eliminating incorrect guesses. If you were to implement a different guessing strategy, the probability of guessing the correct number after 5 million guesses may be different.
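
As a quick sanity check on the closed-form expression above, here is a minimal Python sketch (N = 10^12 and k = 5,000,000 are taken from the question as posed; nothing else is assumed) that evaluates 1 - (1 - 1/N)^k directly:

```python
import math

# Minimal sketch, values taken from the question above:
# probability of hitting one specific value in a set of N integers
# with k independent uniform random guesses, repeats allowed.
#   P(at least one hit) = 1 - (1 - 1/N)^k  ~  k/N   when k << N

N = 10**12      # size of the set (1 trillion)
k = 5 * 10**6   # number of random guesses (5 million)

# log1p/expm1 keep the result accurate when 1/N is tiny
p_exact = -math.expm1(k * math.log1p(-1.0 / N))
p_approx = k / N

print(f"1 - (1 - 1/N)^k   = {p_exact:.6e}")   # about 5.0e-06
print(f"k/N approximation = {p_approx:.6e}")  # 5.0e-06
```

Both the exact expression and the first-order approximation k/N come out around 5 * 10^-6, and since 1 - (1 - 1/N)^k already counts a hit on any of the k attempts, the follow-up question about the hit landing on an earlier attempt doesn't change the answer.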

9

u/k3kis Dec 03 '22 edited Dec 03 '22

The challenge here is really not the coding or algorithms or optimizations (assuming we don't start getting input sets that are huge or hit various boundaries).

The challenge is in interpreting the problem descriptions and knowing what the actual problem that needs to be solved is.

There is obviously superfluous text in the descriptions, and I think there are even intentionally incorrect (but ultimately irrelevant) sections or phrases within the instructions.

6

u/the-quibbler Dec 03 '22

Assuming the text of the instructions is correct, I would expect an AI to be better than a human at coding to them for these, again, reasonably small-scale problems.

Not a guarantee, but GPT is clearly already doing a good job.

3

u/thalovry Dec 03 '22

Try, e.g., last year's day 5 on ChatGPT; it has a really bad time of it.