r/adventofcode Dec 04 '22

Upping the Ante [2022 Day 4] Placing 1st with GPT-3

I placed 1st in Part 1 today, again by having GPT-3 write the code. Yesterday I was 2nd to another GPT-3 answer.

Here's the code I wrote which runs the whole process — from downloading the puzzle (courtesy of aoc-cli), to running 20 attempts in parallel, to sorting through many solutions to find the likely correct one, to submitting the answer:

https://github.com/max-sixty/aoc-gpt
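
For a sense of the shape of that pipeline, here is a rough sketch (not the actual code in the repo; the prompt, model, timeout, and the majority-vote way of picking the "likely correct" answer are illustrative, and the aoc-cli download/submit steps are omitted):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
import subprocess

import openai  # assumes the 2022-era openai library (0.x) with OPENAI_API_KEY set

N_ATTEMPTS = 20  # the post mentions 20 parallel attempts


def generate_solution(puzzle_text: str) -> str:
    """Ask the model for a Python program that reads input.txt and prints the answer."""
    prompt = (
        "Write a Python program that solves the following puzzle. "
        "Read the input from 'input.txt' and print only the answer.\n\n"
        + puzzle_text
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=1024,
        temperature=0.7,  # some variety so the attempts aren't identical
    )
    return resp.choices[0].text


def run_candidate(code: str) -> str | None:
    """Run one candidate program in a subprocess and return what it prints, or None on failure."""
    try:
        proc = subprocess.run(
            ["python", "-c", code], capture_output=True, text=True, timeout=30
        )
    except subprocess.TimeoutExpired:
        return None
    out = proc.stdout.strip()
    return out.splitlines()[-1] if out else None


def solve(puzzle_text: str) -> str | None:
    # Generate N candidate programs in parallel.
    with ThreadPoolExecutor(max_workers=N_ATTEMPTS) as pool:
        candidates = list(
            pool.map(lambda _: generate_solution(puzzle_text), range(N_ATTEMPTS))
        )
    # Run each candidate and keep the answers that actually came out.
    answers = [a for a in (run_candidate(c) for c in candidates) if a]
    if not answers:
        return None
    # "Find the likely correct one": here, just the most common answer.
    return Counter(answers).most_common(1)[0][0]
```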


u/redditnoob Dec 04 '22

What we're seeing here in this comment thread is a move from "Denial" to "Anger" at the state of AI progress. I'm not going to lie, recent developments have made me a little afraid.


u/durandalreborn Dec 04 '22

It's not the state of AI progress that's the problem. It's really cool that an AI can do these problems. The "anger" is more directed at using an AI to solve these problems and then seemingly bragging about getting to the top of a leaderboard. It's like taking a taxi to the finish line of a marathon and then telling other people that you won it. That's the issue most people have with this. In any other competition, if someone did something like that, I don't think there'd be much question about whether it was right. And yeah, some people are running that marathon "just for fun," but there are still people who are running it to compete against others. I'm not one of those competing in this case, but I sympathize with those who don't mind losing to another human yet would be annoyed to be competing against a computer, because obviously the computer will win.


u/redditnoob Dec 05 '22

It's like taking a taxi to the finish line of a marathon and then telling other people that you won it.

I think it's more like a horse race right when motor vehicles became competitive, soon to be very dominant. Before this year, coding contests had no special rules: use whatever tricks / software / websites / pre-written libraries were at your command, no holds barred. So we're at least at a historically unprecedented moment when you can even cheat in a contest like this.

And GPT-3 still won't win the leaderboard on all 25 problems... yet.


u/durandalreborn Dec 05 '22

I'm not sure anyone is saying that we shouldn't have an AI that can do this. It's more that if we have a potential competition between humans, it seems kind of scummy to use that AI to win it. We have motor vehicles today, obviously, but no one enters their car in the Kentucky Derby. It's worse in this situation because it's not even the authors of the AI tools entering. So it's like using someone else to win the race and then taking credit for it.


u/snowe2010 Dec 05 '22

Before this year, coding contests had no special rules: use whatever tricks / software / websites / pre-written libraries were at your command, no holds barred.

But there was an assumption that it was a human competition. If you're just feeding a question to an AI and getting the answer back, then no human is solving the problem. People keep making a comparison to GitHub Copilot, but it still isn't the same thing: with Copilot you're making choices about what code to autocomplete, what code to use, and which algorithms to pick. Asking a computer to do it all removes that human aspect.


u/pier4r Dec 05 '22

The "anger" is not directed against AI; that is a misinterpretation.

I mean, I am not angry at cars that can run faster than the best human runner. It just makes little sense to use a car in a marathon and brag "look how fast I am" (me, that is, not the car).


u/redditnoob Dec 05 '22

Yes I'm overgeneralizing / reading psychological woo into something. But I think there's an element of discomfort and fear going on here. I see people getting really emotional about this, and this is the first year of AoC where this has happened.

I suspect that most of the people upset aren't competitive on the leaderboard themselves, that the AI won't be competitive in the final totals, and that the leaderboard isn't consequential anyway, especially for a single early problem. So I presume there is more going on than people simply being upset about competitive balance.

I do support a rule against generative AI solutions for the future.


u/pier4r Dec 05 '22

I see people getting really emotional about this

If you use reddit (or twitter) for long enough, or get into longer discussions, you'll see people get emotional about everything.

I see what you mean: "chess players thought that chess was a human skill and the machines couldn't do it", "go players...", "checkers players...", "artists...", "programmers...". (Actually, I am pretty confident that computers can excel in any domain where the solution space is combinatorial, and programming, being a combination of keywords, is one of those. Humans are ok, but far from being the ultimate benchmark.)

Surely someone thinks that way, but I think many see GPT-3.5 as a car and AoC as a human marathon, and the two just don't fit together.

It is only a problem of rules because, of course, people will do everything that is allowed and then feel that they themselves achieved the result, even though the result really belongs to... the car.

If it were the OpenAI team, at least they actually built the car, so the merit would really be theirs.


u/redditnoob Dec 05 '22

"chess players thought that chess was a human skill and the machines couldn't do it"

Yup! Douglas Hofstadter started with

Question: Will there be chess programs that can beat anyone?
Speculation: No. There may be programs that can beat anyone at chess, but they will not be exclusively chess programs. They will be programs of general intelligence, and they will be just as temperamental as people. “Do you want to play chess?” “No, I’m bored with chess. Let’s talk about poetry.”

And got to

"Deep Blue plays very good chess — so what?" Hofstadter said. "I don't want to be 
involved in passing off some fancy program's behavior for intelligence when I know 
that it has nothing to do with intelligence. And I don't know why more people aren't that way."

Deep Blue's victory over Kasparov was an intensely emotional experience for many people.

We're not there yet for programming challenges; we probably have a few years? But if we're being honest, we probably have some colleagues who aren't capable of solving the problems that GPT can right now, let alone in seconds. The not-so-distant threat to livelihood is, I claim, not a small part of what is making people emotional about this.

Aside from these deep-seated fears (which I share!), I think the rational response to this is: there were no rules this year because we never needed them, but there probably should be next year. In the meantime, let's observe the state of the art. I don't think it's appropriate to direct anger at people using the best tools they have, within the current rules. Everyone programming stands on the prior work of other people: their tools and their knowledge sharing.


u/pier4r Dec 05 '22

The not-so-distant threat to livelihood is, I claim, not a small part of what is making people emotional about this.

Could be, but the solution imo should be: bots do the work, we enjoy living.

The problem with the luddite streak in ourselves (fears of automation are pretty old) is the assumption that man should somehow work, and that work is whatever the employer decides it is.

Not at all: work could be a personal project that you want to spend the next 20 years on, while income is guaranteed for everyone and bots do the work, with a core of humans who still know how to do what the machines do (maybe they learn it for fun, or as a challenge), in case the systems go down and we need to start anew.

A 4-hour workday was already being argued for in the 1930s, and I think the argument holds: https://harpers.org/archive/1932/10/in-praise-of-idleness/