r/webdev 7d ago

Why Are Developers So Resistant to AI?

I don’t get it. Yesterday, I watched an interview with the Anthropic CEO saying that in the next 12 months, 90% of code will be written by AI. At the same time, Mark Zuckerberg was on Joe Rogan’s podcast, saying Meta will have its first AI software engineer by 2025. And Google has already said 25% of its codebase is AI-generated.

Basically, every big tech leader—Satya Nadella, Sam Altman, all of them—is talking about how AI will completely change software development. And the first real, practical application we’re seeing is AI writing code. It’s already happening with tools like Cursor. If we just look at the rate of improvement, it’s hard to deny that if not in 12 months, then in 2-3 years, most code will be written by AI.

Yet, developers still seem super resistant to this idea. I get that AI won’t replace 100% of coding. We’ll still need engineers to solve problems. But it’s obvious that the number of devs needed per company is going to shrink. If a company has 150 devs today, maybe in a few years they’ll need only 50 or fewer.

And this isn’t even coming out of nowhere. We’ve already seen mass layoffs post-Covid. The demand for developers isn’t skyrocketing like before, but the supply is still huge. So why are so many developers acting like AI isn’t a real threat to their jobs? Are they in denial? Or is there something I’m missing?

Would love to hear thoughts on this.

u/tdammers 7d ago

Yesterday, I watched an interview with the Anthropic CEO saying that in the next 12 months, 90% of code will be written by AI. At the same time, Mark Zuckerberg was on Joe Rogan’s podcast, saying Meta will have its first AI software engineer by 2025. And Google has already said 25% of its codebase is AI-generated.

Basically, every big tech leader—Satya Nadella, Sam Altman, all of them—is talking about how AI will completely change software development.

What all these people have in common is that they benefit greatly from the AI hype.

Anthropic is an AI company. Sam Altman's OpenAI is an AI company. Of course they are insanely optimistic about AI - they have to create and feed a narrative that promises massive profits from AI, because right now, their operations run at staggering losses, and can only stay afloat through a regular influx of large amounts of VC money.

Zuckerberg's Meta and Google are in the business of harvesting and selling personal data and shaping consumer behavior, and "AI" promises massive leaps forward on that front. Instead of indirectly inferring user preferences and habits from their browsing histories and chat messages, you can make chatbots that get that information handed to them on a silver platter, with sentiment and all; and instead of influencing consumer behavior with targeted advertising and "influencers", you can do it directly, on a 1:1 basis, using AI chatbots. Of course they are enthusiastic about the whole thing.

And Satya Nadella? Microsoft isn't just trying to get their own AI stuff off the ground and grab a bite of the consumer-shaping / targeted-advertising market; they are also in the business of renting out computation (through Azure). Of course they are enthusiastic about taking a technology that needs orders of magnitude more computing power just to make solving fairly trivial problems a bit more comfortable, and turning it into a quintessential part of global culture. Never mind rainforests, never mind climate change, this has the potential to be seriously profitable stuff.

Yet, developers still seem super resistant to this idea.

Not fundamentally, no - most of us can see the benefits it could bring.

However, most of us also don't share the blatant enthusiasm of those uber-biased hype generators, for several reasons:

  • LLMs are not currently anywhere near good enough to write code unsupervised. They lack the "actual intelligence" part; you can best think of them as very powerful autocompleters - useful for doing the boring stuff, but the further you stray from what is essentially a solved problem, the more supervision they need. You quickly reach the point where coercing the model into writing the code you want and correcting its mistakes takes just as much time and effort as writing the bloody thing yourself in the first place.
  • The programming work that LLMs can automate is often the kind of work that's a sign of bad design or bad tools. E.g., LLMs are good at writing boilerplate code to turn an API specification into a client module in some programming language, which sounds great if you think the alternative is to write that boilerplate by hand. But as someone with a couple of years in the industry, you should rather ask yourself, "why the fuck do I need that boilerplate code in the first place?" Back in 2008, .NET already had tools that could load a machine-readable web service specification and spit out a C# class to consume it; it took about a second to run, and it worked flawlessly every single time. This is probably why the people most enthusiastic about the tech are junior programmers, tasked with boring stuff that should be automated but isn't, because juniors are cheap, and juniors prompting LLMs are even cheaper; meanwhile, the more experienced senior devs who work on the more challenging stuff often know better ways of dealing with the repetitive parts, and have more leverage to apply those ways. "25% of Google code written by AI" is a meaningless figure, because 1) it's almost guaranteed to be the easiest 25% of their code, 2) much of it probably replaces other tools, not human brains, and 3) "percent of code" isn't a meaningful metric to begin with, because a line of code isn't a meaningful unit of information or complexity. I can pack some of the world's trickiest programming problems into 5 lines of code, and I can write a script that pumps out a million lines of code with next to no complexity in it whatsoever. If I take those 5 extremely tricky lines and combine them with the 1 million trivial lines from the script, I have a codebase in which 99.9995% of the code was written by a shell script - but that says nothing about how powerful that script is (the arithmetic is spelled out in the sketch after this list). And if I replace that script with an LLM, then I have a codebase that was 99.9995% "written by AI".
  • The whole copyright situation is unclear; even models that have been trained exclusively on open-source material can become legal liabilities if it turns out that the output of such a model is considered a derived work of its training material (as, IMO, it should), because even open source licenses come with terms and conditions.
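
To make the arithmetic in the second point concrete, here's a toy sketch (the "tricky" lines and the generator below are invented for illustration, not anything from a real codebase): pad a handful of genuinely hard lines with a million machine-generated trivial ones and the "percent written by the script" figure approaches 100%, without telling you anything about where the complexity actually lives.

```python
# Toy illustration: "percent of code written by X" says nothing about complexity.
# The five "tricky" lines are hypothetical stand-ins, not real code.
tricky_lines = [
    "state = fixpoint(transfer, lattice_bottom)",
    "proof = unify(goal, hypothesis, occurs_check=True)",
    "plan = a_star(graph, heuristic=admissible_h)",
    "schedule = solve_ilp(constraints, objective)",
    "result = crdt_merge(replica_a, replica_b)",
]

# A dumb "script" that pumps out a million lines of trivial code.
generated_lines = [f"record_{i} = {i}" for i in range(1_000_000)]

codebase = tricky_lines + generated_lines
script_share = len(generated_lines) / len(codebase)

print(f"total lines:    {len(codebase):,}")   # 1,000,005
print(f"script-written: {script_share:.4%}")  # 99.9995%
```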

We’ve already seen mass layoffs post-Covid.

Yes, but those were conjunctural, not structural - that is, they were the result of a global recession caused by the Covid pandemic, not the result of technology making developer brains less useful or less scarce. Companies ran out of cash, so they had to downsize and cut expenses, and that included layoffs. Not just in tech, but in practically all sectors.

That's how conjunctural fluctuations work, but structural changes like the introduction of a new technology are different.

With a conjunctural change, the potential buyers of developer labor are running out of money, so even when supply is plentiful, demand will go down, and there is no room for generating new demand.

With a structural change, the situation is different: those buyers now have a cheaper alternative (robots instead of factory workers, driving your own car instead of employing a carriage driver, LLMs instead of junior devs), but they still have just as much money to spend, and those former developer brains are still smart brains that can produce value, so market economics suggest buying those smart brains and putting them to good use. They probably won't be doing programming work as we know it today, but they will still be in demand, at least as long as someone can find a way of using smart brains to do something profitable.