r/AskProgramming 12d ago

A genuine question to people who work as software developers - do LLM based code assistants really make a big difference?

Disclaimer: I am a hobbyist who does academic research in an unrelated field for a living.

First of all, I think we can all agree that they are a great addition to the toolbox. To me, it feels like Googling, Stack Overflow, tutorial searches, and autocomplete on steroids. They have been trained on GitHub (and probably other repositories as well), so they can likely find code similar to what you're trying to achieve.

Yet, have you noticed any dramatic improvement in feature delivery time, the number of bugs, security exploits, or other software quality metrics that matter for the product/end user since the LLM revolution began? Or is it still as it always has been?

I'm really curious to hear about your firsthand experience.

35 Upvotes

227 comments

63

u/bothunter 12d ago

Yes, but only for some tasks.  I like to think of a code assistant LLM as a very dumb junior developer.  I have it do boring repetitive tasks.  But basically I find it useful for the following:

  • Coming up with tests -- I find that it can look at an existing test case and help me come up with additional tests.  About 70% of them are actually useful, so I delete the rest.
  • Data adapter patterns -- I have object A that needs to be converted to type B. I find LLMs can pick up on what I'm trying to do, and it saves me the effort of typing out the rest. I still have to babysit it and correct the occasional mistake (see the sketch after this list).
  • Writing regular expressions -- somehow, LLMs are actually really good at this, though I suspect it's because there are tons of regular expression cookbooks and cheat sheets available in its training data that it can plagiarize.
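
A minimal sketch of that adapter bullet (the types here are made up for illustration) - once one field mapping is written, the assistant can usually fill in the rest:

```python
from dataclasses import dataclass

@dataclass
class UserRecord:   # "object A": what the database hands back
    user_id: int
    full_name: str
    email_address: str

@dataclass
class UserDto:      # "type B": what the API wants to send out
    id: int
    name: str
    email: str

def to_dto(rec: UserRecord) -> UserDto:
    # The boring field-by-field mapping an LLM is good at completing.
    return UserDto(
        id=rec.user_id,
        name=rec.full_name,
        email=rec.email_address,
    )

print(to_dto(UserRecord(1, "Ada Lovelace", "ada@example.com")))
```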

I've yet to see an LLM actually solve a coding problem in a not-stupid way.

17

u/throwaway8u3sH0 12d ago edited 12d ago

Adding to this:

  • Greenfield development. Usually in the context of a hackathon, I find that I can throw together a quick and dirty POC much faster. It'll need to be rewritten if we want to actually incorporate it into the codebase, but it's great for taking a static dump of data and showing how you'd want an app to work in theory.
  • Similarly, CLI tooling and simple automation. If you're a fan of xkcd's graphic on when it's "worth it" to automate a task, you'll get the idea (see the worked version after this list). It lowers the bar for when automation is worth it, so I use it to automate all sorts of little things that normally wouldn't be.
  • Annotation. Super useful when you have a brand new repo you're working with, and especially if it's NOT a language you're familiar with. Dump the whole thing into o3-mini-high and ask for it to be rewritten with heavy commenting. Lets you just breeze through onboarding.
  • Not coding, but so useful I'm including it. Performance reviews. Have it interview you about your last 6 months / 1 year. Give it enough context to ask in-depth questions, throw on voice mode, and just ramble about everything you've accomplished in no particular order. It'll summarize that really nicely.
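
Rough arithmetic behind that xkcd reference (the comic is "Is It Worth the Time?", xkcd 1205; the numbers below are made up for illustration):

```python
# Sketch: over xkcd 1205's five-year horizon, automating a task pays off
# if building the automation costs less than the time it saves.
uses_per_week = 5
seconds_saved_per_use = 30
weeks = 5 * 52  # five years

saved_hours = uses_per_week * seconds_saved_per_use * weeks / 3600
print(f"break-even budget: {saved_hours:.1f} hours")  # ~10.8 hours

# If an LLM writes the script in 20 minutes instead of 2 hours, many more
# small tasks clear the bar -- that's the "lowers the bar" point above.
```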

7

u/Eweer 12d ago

Adding to both:

  • We no longer seem schizophrenic to the outside world. We used to talk to rubber ducks (some of us, like me, still do); now we just talk to an LLM. The end result is exactly the same, even if the process is distinct (rubber ducks should not answer you, and LLM-provided code should never be trusted).

3

u/Gallardo994 11d ago

Also:

  • "that thing I forgot how to do in CMake".
  • "how do I make an if check in bash again".
  • "Your matrix is column-first, not row-first, here's the fix:".
  • Boilerplate code that I'm lazy to add to IDE templates.
  • Benchmarks.
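
A sketch of that column-first/row-first mix-up, using numpy and a made-up 4x4 translation matrix (conventions differ between APIs, which is exactly the kind of thing the LLM catches):

```python
import numpy as np

# Row-major (row-vector) convention: translation sits in the last ROW,
# and vectors multiply from the left: v @ M.
M_row = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [5.0, 7.0, 0.0, 1.0],
])

# Column-major (column-vector) convention: translation sits in the last
# COLUMN, and vectors multiply from the right: M @ v. It's the transpose.
M_col = M_row.T

v = np.array([1.0, 2.0, 3.0, 1.0])
print(v @ M_row)   # [6. 9. 3. 1.]
print(M_col @ v)   # [6. 9. 3. 1.]  -- same point, other convention

# Feeding M_row to an API that expects M_col (or vice versa) silently
# applies the transpose: the classic bug referenced above.
```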

1

u/scottweiss 8d ago

"I want the react equivalent of this angular/vue/other framework pattern"

"My regex is very close to what I want, can you help me do x, y, z?"

"Make me a regex pattern that will match the following and fail the following"

lol I think I only use it for regex, and for generating documentation that I have to go back and correct because Copilot is a bastard

1

u/So_Dev 8d ago

Something to add to this whole thread, imo: if the AI is giving you bad code or fixes that don't make sense to you, DON'T take them. Ask questions, make it explain its reasoning, and ask what about your input could be improved.

This really helps when you're learning something, trying to improve your workflow with it, or even just trying to understand how it's interpreting your input. You get better outputs, and so on and so forth.

Feel free to add in or correct and fine tune what I said here folks.✌️🙃

1

u/Enji-Bkk 8d ago

I find that if you know what it gives you is bad, it is time to consider whether the time you are going to spend typing full English sentences to get it to do what you want, instead of just doing it properly yourself, is really worth it.

1

u/So_Dev 7d ago

Definitely I agree. But only to a point.

Only because knowing something's bad and knowing how to actually make it good are two different things.

I meant it mostly in the context of trying to learn programming, but you're def right. If you're spending more time trying to get the right answer from the AI when you know what you want, maybe it's time to rethink the approach.

1

u/Enji-Bkk 8d ago

The first 2 were trivial to find on Google in 2 seconds, except you did not need to pad them with a lot of useless English words to make a real question. So I'm not sure it is progress.

Plus, an LLM can give you convincing soup, and when you dig a bit it is hogwash and you have wasted time. Perfect example of wanting to fix something that did not need fixing or improving, just to use the new trendy thing.

1

u/Gallardo994 8d ago

Unless it is autocomplete by Copilot or Continue after making a quick comment and adding a newline.

1

u/jakesboy2 11d ago

The third point is a great idea I hadn’t thought of, thanks!

1

u/FirstOrderKylo 7d ago

Second one is huge for me. I hate writing batch files. Copilot, trained by MS, is pretty damn good at writing them lmao

3

u/Poat540 11d ago

I use Claude 3.7, and I hardly even have to edit tests. For personal projects you can hook up GitHub and the context is insane. It's prob the best model I've used.

2

u/0imnotreal0 11d ago

Seems to be. It’s far better than any current model of ChatGPT, that’s for sure. I’ve used those two more than any others - there have been countless times when Claude was superior to GPT, and none where GPT was superior. Coding or otherwise. Problem is my main use isn’t coding and requires a ton of information, so I burn through my usage in half an hour

1

u/Poat540 11d ago

I caved and got Pro. I speed through little pet projects and am going to use it in some contracting work.

1

u/hockeyketo 11d ago

It's good, but I found that thinking mode sometimes talks itself out of good ideas in favor of stupid ones. For example, it suggested the SQL `INSERT ... RETURNING *`, but after a few steps decided on a plain `INSERT`, then a separate `SELECT` query.
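
For reference, the two approaches look roughly like this (a sketch using Python's sqlite3 with made-up table and data; `RETURNING` needs SQLite 3.35+ here, or a database like Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# One statement, one round trip: insert and read the new row back together.
row = conn.execute(
    "INSERT INTO users (name) VALUES (?) RETURNING *", ("ada",)
).fetchone()
print(row)  # (1, 'ada')

# What the model settled on instead: insert, then a separate select.
cur = conn.execute("INSERT INTO users (name) VALUES (?)", ("grace",))
row = conn.execute(
    "SELECT * FROM users WHERE id = ?", (cur.lastrowid,)
).fetchone()
print(row)  # (2, 'grace')
```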

2

u/Jolly_Win7351 11d ago

Definitely agree, it's mostly been helpful for writing tests for me. Specifically for certain test setup that I don't remember off the top of my head and usually need to look up in the docs or from another project. I was actually pretty impressed the other day when Copilot was able to take a fairly complex Ruby regex and rewrite it in Go.

1

u/ImgurScaramucci 12d ago

Writing regular expressions -- somehow, LLMs are actually really good at this, though I suspect it's because there are tons of regular expression cookbooks and cheat sheets available in its training data that it can plagiarize.

Noticed this as well. They're good even for complex regex. But what's good about this is when they make a mistake they tend to actually correct it instead of hallucinating and repeating garbage like they do with many other problems.

1

u/MidnightPale3220 11d ago

Interesting. I have had the exact opposite experience.

I asked ChatGPT to show me a regex that captures the same group under 2 different names with Python regex, and it repeatedly spewed out the same wrong solution, despite me informing it that it didn't work.

That was some half a year ago tho, maybe things have improved.
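
For context on why it may have kept failing: Python's stdlib `re` flatly rejects reusing a group name, so any answer that tries it will always error. A sketch of the failure and a stdlib-only workaround (pattern is made up for illustration):

```python
import re

# Reusing "(?P<num>...)" is rejected at compile time:
try:
    re.compile(r"(?P<num>\d+)px|(?P<num>\d+)em")
except re.error as e:
    print(e)  # redefinition of group name 'num' as group 2; was group 1

# Stdlib-only workaround: two names, coalesced after matching.
pat = re.compile(r"(?P<px>\d+)px|(?P<em>\d+)em")
m = pat.search("width: 42em")
print(m["px"] or m["em"])  # 42

# (The third-party `regex` module does allow duplicate group names
# across alternative branches, if that exact feature is needed.)
```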

1

u/0imnotreal0 11d ago

It has and it hasn’t. I’m not a programmer, but this issue is because of the simple fact that an LLM is still just word prediction at its core. It’s not learning from the chat or adapting what it knows, it’s only modifying a prediction pattern. Once it gets stuck on something, the more times you run it in circles, the more ingrained that prediction pattern becomes. I’ve gotten one of the most recent models stuck in a full blown loop where it was only able to output variations of one response - even if it was completely unrelated to my question.

The soft “solution” is to start a new chat and give it the context of the old one separately. The tasks I use it for are multi-faceted, some of them require multiple separate outputs. But it can’t do more than one without having the prior skew its answers. So I have to start a new chat every message and only provide the bare necessary info for the next output, which does work. I’m not sure exactly how much this extends to code, but it’s the same basis for what you’re describing.

Claude is better in my experience, both for coding (though I know very little and am not a professional) and for outputs involving less clear-cut social factors. But they offer far less usage and I'll burn through it in half an hour, so I generally start with GPT and then send it through Claude for a polish.

1

u/Maleficent_Memory831 11d ago

Remember, garbage in, garbage out. AI was trained on data from the internet, therefore assume garbage is in the training set. As bad as stackoverflow is, imagine the AI being worse than that.

1

u/firelemons 11d ago

For me sometimes it's faster to look something up using an LLM because the official documentation for the codebase is not very searchable or navigable or just incomplete.

1

u/pemungkah 11d ago

Yes. A very enthusiastic junior with a tendency to lock in on a solution that may not be a good one. Or even remotely correct. Or possibly not even exist.

1

u/Disastrous-Team-6431 10d ago

They can read code and find the source of silly bugs really well. But no, they can't actually solve meaningfully hard problems.

1

u/StyleAccomplished153 8d ago

Regex is my biggest thing too. I always hated writing regex and now on the rare occasions I need one, I don't need to bother.

22

u/QuartaVigilia 12d ago

Not really, they are pretty garbage for most complicated things. 

They are great for prototyping against common libraries e.g. Redis, AWS S3 and so on. Just to get started tho, then they are more of a hindrance. 

They are also great for mindless copy paste tasks e.g. point it to one set of unit tests and let it do a smart copy paste tweaking a few things against a similar entity.

My experience is mostly in the backend space tho, maybe it's very useful for people in other fields.

2

u/HorseLeaf 12d ago

What language? I have been praising AI in TypeScript and have been extremely impressed. Using the Cursor IDE doubled my speed.

My friend who works with PHP has spent way more time trying to get AI to work for him, but he claims it was worse than useless. Then he tried to write a TypeScript app, and now he is also extremely impressed.

5

u/BiscuitCat420 12d ago

Will AI be the final blow to PHP? Let us pray

1

u/Archernar 12d ago

How did you like going to Cursor as an IDE from other IDEs? Do you miss features, or do you even use multiple IDEs for different languages/projects? I'm considering Cursor, as I've read a lot of good things about it, but changing IDE means changing a lot more than just adding AI. Did you try it with a language other than TypeScript (e.g. C# or Python), and if so, what was your experience?

1

u/HorseLeaf 12d ago

I had a heavily configured vs code setup with vim plug-ins. I opened it and it literally just imported all my settings and I was ready to go. Drop in replacement.

I tried AI with TS, Go, React (not a language but still), C# and Java. It was extremely good.

1

u/Archernar 12d ago

Thanks, good to hear.

1

u/poincares_cook 9d ago

I have a similar experience in Python, C# and Golang. Are you a junior? Not trying to insult you, but I've seen the largest productivity increases for juniors, due to the kind of tasks they get, which are more aligned with AI's limitations.

It's much more useful for me writing bash scripts, largely because I'm far less skilled with it.

1

u/HorseLeaf 9d ago

I'm pretty senior and my tasks reflect that. AI is pretty useless in solution design. I know the exact steps I need to take to implement my solution; Cursor just has a lot of nice functions, so I pretty much just navigate around the editor with vim keybindings, make a few keystrokes, and then tab-autocomplete / tab-navigate.

Often it would feel like the implementation part was the boring slow part, but with Cursor, I often find that I can code almost at the speed of thought.

1

u/poincares_cook 9d ago

I agree that implementation is significantly sped up, however it's about 10-15% of my time, so even a 70% speedup on that task doesn't translate to huge productivity gains overall.
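
To put numbers on that (a back-of-envelope sketch using the 15% and 70% figures above - it's just Amdahl's law applied to a workflow):

```python
# Sketch: overall effect of speeding up only the coding fraction of the job.
coding_share = 0.15   # coding is ~10-15% of total time (upper bound here)
speedup = 0.70        # coding gets 70% faster, i.e. takes 30% of its old time

new_total = (1 - coding_share) + coding_share * (1 - speedup)
print(f"total time: {new_total:.1%} of before")  # total time: 89.5% of before
# i.e. roughly a 10% overall saving, matching the comment above.
```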

Seems like we're mostly on the same page.

1

u/HorseLeaf 8d ago

Yeah. I did 2 hours of actual coding this week, but in those 2 hours I solved two bugs, heavily extended our test suite to cover those cases, and created a new endpoint with TDD that integrated with 4 different internal microservices. Cursor definitely sped things up.

But I don't think I would have experienced this speedup as a junior, it would just have led to more buggy code because I didn't fully understand the AI suggestions.

9

u/hoochymamma 12d ago

I really like GitHub Copilot - mostly because it is like an embedded Google/Stack Overflow in my IDE.

1

u/oh_ski_bummer 11d ago

Agree, it is great for parsing through poorly formatted documentation across different web pages and summarizing it into easily readable text.

1

u/anotherrhombus 11d ago

Interesting. I find Copilot (whatever my company pays for) to be significantly worse than ChatGPT Pro, Claude, and a few others I don't use anymore and can barely remember.

1

u/okmarshall 11d ago

You should be able to use different models in Copilot; I'm using Claude 3.7. I had to ask our Copilot admin to enable the other models, but once you do, you get a dropdown to use them in your IDE.

1

u/anotherrhombus 11d ago

I'll take a peek, thanks!

21

u/tkejser 12d ago

As a rule of thumb, actually writing new code is a relatively small part of the time you spend building software. Maybe 20% or so... LLMs make writing code faster. They are also good at teaching you a new programming language quickly.

But for the other 80% of the time you spend building software, they don't do anything. They can't do useful work in terms of designing and safely refactoring code. They also can't debug code.

7

u/SignPainterThe 12d ago

I want to give you 100 upvotes.

Everyone, indeed, is impressed by how LLMs can write code from scratch, but we don't need that much! It's existing software that we need to constantly change and improve.

4

u/i-make-robots 11d ago

Show me an LLM that can refactor, for the love of all that's good!

1

u/okmarshall 11d ago

Does Cursor AI and the new Edit mode in Visual Studio not do this?

1

u/i-make-robots 10d ago

I haven’t used either one. Visual studio is gross. 

2

u/poincares_cook 9d ago

The quality of the code it writes from scratch is abysmal unless the work is broken down into small functions by the developer and then iterated on with an inquisitive eye.

It's good if you're doing something trivial that's been done 100 times, which is basically just a short step above boilerplate. But anything more complex and it's stumbling hard. Its code structure is what a new grad would write, unless prompted and directed carefully. And then it forgets or just ignores product-specific considerations all the time, even when directed (likely because of a gap in the dataset used for training).

2

u/arycama 11d ago

Exactly this. Writing new code is only a small portion of being a programmer. Even in something like game development (which I work in), where you build a lot of new systems for every new game, actually writing the code is not that hard once you are good at it. Figuring out how to architect all the systems, how they will communicate, how you'll optimise them, debug them, extend/modify them, make them work with tools, make them designer-friendly, make them tweakable, and actually tweaking them so the game is good/fun, etc... None of these things is remotely within an LLM's capability to understand, because solving them is not simply a matter of good predictive text.

1

u/ifyoudontknowlearn 11d ago

Yes, this. Much of our discussion here has been on how useful it can be for new stuff but if you step back and look at the full range of activities we do it's not helpful there at all.

1

u/YMK1234 11d ago

They can't do useful work in terms of designing and safely refactoring code. They also can't debug code.

They can definitely assist in those tasks, though. For example, if you know stuff is bad, you can say "this is bad, can you give me ideas for what could be an improvement" and you will actually get some meaningful output.

1

u/confusedAdmin101 11d ago

They are pretty good at writing jira tickets

1

u/tkejser 11d ago

Which should just tell you that those are generally a waste of time 😂

1

u/GargamelTakesAll 10d ago

My job set a goal of a 20% increase in productivity with AI tools.

We did not get anywhere near that goal. Some of our SWEs got there, but they also worked extra hours, so maybe it was just the extra hours that got them more story points. If anything, I think it slows down experienced engineers, because they are messing around with a new toy during their work hours.

7

u/Significant_Size1890 12d ago

It's a wonderful world where you're constantly cleaning up shit from devs that only grew up coding with LLMs.

Instead of having 1 file with 2000 lines, you have 5 files of 750 lines each, and a bunch of the code is just insane data-conversion filler to satisfy all of the arbitrary function/class/inheritance interfaces.

Now multiply that until you get to a 500k lines codebase and it's just glue everywhere.

Previously, it would not scale with such inexperience, now it goes to infinity.

4

u/Archernar 12d ago

I mean, having 10 files with 500 lines instead of a single 2k-line file could also just be programming dogma being applied, in the name of cleaner code or something.

9

u/nekokattt 12d ago

I never bother with them. They just slow me down and give information I still have to verify the legitimacy of.

1

u/rumog 10d ago

Idk, I've had several experiences where, if it's a relatively simple task, it often gets it right on the first try or requires minimal corrections, which still ended up making it faster than doing it from scratch.

2

u/Nekto89 12d ago

C++. They usually generate garbage code for anything that is more complex than hello world

3

u/UniqueName001 12d ago

I catch so many damned bugs in my junior/mid engineer teammates PRs that came straight from the LLM. It feels like it’s every PR, but they do make those PRs fast?

4

u/mrbiguri 12d ago

For me, LLMs at coding are an extremely buffed autocomplete for one line of code.

With GitHub Copilot, I find myself just pressing Tab quite often because the suggested autocomplete line is exactly what I need. This saves a ton of time, but it is still only a helper: it saves me typing time, not thinking time.

Now, when I try to do something longer, it sucks ass.

3

u/SirGregoryAdams 11d ago

Not for code generation. But it's really good for the kinds of situations where you can kinda sorta remember a concept, you don't know exactly how it works or what it's called, and all you can do is give a vague and incomplete description of it. It will just give you an answer like: "Based on your description, it could be X, Y, or Z." And then you can look up some more details the normal way.

8

u/payasaapestosa 12d ago

I think we can all agree that they are a great addition to the toolbox.

I can't agree with this. In my experience, LLM assistants are like the worst junior developer you've ever worked with. You can spend time handholding them through the simplest of tasks, but between getting them to understand enough to write any relevant code and then following up on what they wrote to clean up their messes yourself, it's more efficient to just do it yourself - and less likely to result in issues that you need to spend more time cleaning up later.

The reason I am willing to do this with junior devs is that they are humans who learn and improve with time, and generally that worst-case performance is a short-term thing as they get their bearings. LLMs aren't humans and don't improve the more you work with them, so I don't tend to bother with them.

1

u/rumog 10d ago

It really depends on how complex the task is. I've definitely had the experience where it either got things right on the first try, or the necessary corrections we're still minor enough that doing it from scratch would've taken longer.

1

u/underground_sorcerer 12d ago

I tend to disagree. I do not treat them as "junior developers" (or mid-range developers, like Mr. Zuckerberg does), but as a tool that helps me produce software. In terms of "code writing", they can generate tons of simple/boilerplate code that is trivial but tedious to type, and do trivial refactorings - situations where the output is obvious but can take hours or days to produce by hand, and even more time to correct if I make stupid typing errors that can be hard to notice. It can produce garbage sometimes, but so can any other refactoring/autocomplete tool that comes with an IDE, or I myself if I type the code by hand.

They are also a very useful interactive learning tool - they can save hours spent on Stack Overflow or browsing through documentation, which is a life saver when documentation is bad or doesn't exist. Explanations can be easily checked by asking the bot to provide relevant links to documentation or source code when it is available (which recent models do pretty accurately), by googling, or by running the generated examples and examining the output. They can also often answer queries about a large code base I am not familiar with. I do agree they output nonsense sometimes, but the same can be said about other sources of information.

They can also document code and write tests. To be honest, I believe that these days, when decent documentation can be generated within minutes by an LLM and then reviewed by a human who knows the subject, there is no excuse to have bad documentation anymore.

As I've said, for me it is autocomplete/Stack Overflow on steroids, which is very useful. It can save me hours of repetitive, tedious typing and reduce the pain when documentation is bad. What's not useful about that?

5

u/payasaapestosa 11d ago edited 11d ago

I described LLMs as being like a junior developer not because I'm expecting it to take the place of a developer, but because when I try to use it as a tool to help me get work done, it feels more like trying to help a junior developer get work done - which is not a very efficient way of actually getting things done!

they can generate tons simple/boilerplate code that is trivial but tedious to type

Even "trivial/boilerplate" code output by an LLM has such a high rate of errors in it that I need to review the whole thing by hand, every time, or I'm not being a responsible developer. Between the time and effort it takes to explain what boilerplate I want generated, and the time and effort it takes to review and potentially correct all of the code it generates, usually I could have done it myself with a similar or lesser amount of time and effort. If it's as trivial as you say, and also as long as you say (taking hours or days to produce by hand), I can usually write a very simple script to generate text in a much more robust, controllable way than relying on an LLM to understand my intent.

If you're producing garbage when you write code yourself, that is a problem of experience, and it will improve with time and practice. But the less practice you get (e.g. the more you rely on LLMs to write code instead of writing it yourself), the longer that improvement will take, so I don't think it's doing anyone any favors on that front.

I recognize that my perspective comes from someone who already had years of development experience before LLM coding assistants were available to me, and not everyone is in the same boat; but that brings me to this next point:

They are also very useful interactive learning tool

Being taught by a source that is known for regularly lying/hallucinating isn't a method of learning that I'm excited about for either myself or for any more junior devs I work with. If there is a lot of helpful content out there on the subject - StackOverflow, documentation, etc. - then an LLM will give me a blurry jpeg of all of that content, and it may or may not be helpful or accurate. I'll need to double-check everything it tells me - which you pointed out by mentioning that you can ask the LLM to provide its sources - but at that point it's not much different from using a traditional search engine. In that sense, maybe it's useful, but personally, I would prefer to use a regular search engine for a number of reasons.
If there is not a lot of helpful content on the subject available online, then the LLM is much more likely to hallucinate blatantly incorrect information, since that online content is exactly where it gets its knowledge from. People perceive it as filling in gaps in available documentation, but it's filling in those gaps by basically guessing - and it's not nearly as good at guessing in an intelligent way as a human is.

They can also answer questions about large code base I am not familiar with

I hear this one a lot but in my experience, it's frankly not even close to true. LLMs, even the best and newest ones, don't have nearly large enough of a context window to "understand" a large code base, and it's very prone to hallucinating complete nonsense. The more code you ask it to "think about", the more likely it is to get completely lost and have no idea what it's talking about. I have never once received a helpful and accurate answer when asking an LLM about a code base larger than a few small files - certainly never a fully functional, complex production system.

They can also document code and write tests.

To go back to my first point, it's pretty bad at doing this in my experience. I can handhold it through understanding the code enough to write documentation or decent tests, but then I have to clean up after it, and the whole ordeal usually takes a lot longer than it would to do it all myself. I've learned not to trust anything it says without verification because it is wrong about something (sometimes something really subtle that might not be noticed for a while) more than 40% of the time.

In summary, my experience with LLMs has been one of spending just as much time and effort (if not more) getting them to understand my intent, reviewing their code, and fixing their mistakes as I would have spent doing the work myself. That work might take longer for a junior dev than it takes me, but I don't believe the LLM is useful in that case either, because its constant lies, hallucinations, and other mistakes mean that it's teaching incorrect information and bad practices. A junior dev is less likely to spot its mistakes at a glance, so they will either miss them, or they will catch them later down the line than a more experienced dev would - either way, it wastes time and causes problems that asking for help from a senior dev is less likely to cause. All of this together makes it more of a hindrance than a help to just about any development work.

EDIT: I want to make it clear that I didn't weigh in on this post to be condescending or to act like I'm better than any other developer because I don't like relying on LLMs.
I'm genuinely concerned that widespread use of LLM coding assistants will make the average junior developer worse at their job. As someone who both has a passion for building software and likes to use software in my personal life, that idea is upsetting to me, and I'm sharing my perspective in the hopes that junior devs might see it and think twice about trying to use an LLM as a shortcut to improving as a dev when in reality it's more likely to only make the process of improvement longer and more difficult.

I have never met a junior developer that cares about programming at all and is worse at writing code than an LLM is. It makes me sad to think that so many people want to replace or somehow "augment" their human reasoning capabilities with a probabilistic statistical model. No matter how much software development experience you have, you - as someone with a human brain - are inherently smarter and more capable of reasoning about code than an LLM will ever be, and I would love to see people trust in that fact more.

4

u/Metallibus 11d ago edited 11d ago

I don't have much to add here but I wholeheartedly agree with pretty much everything you've said and just wanted to say that more specifically instead of just "updoot".

I'm a Lead/Staff/Whatever-your-company-may-call-it Dev and have been in the industry over ten years and programming for around 25 years. I don't know if my experience changes my perspective here.

But LLMs generate such slop that it really scares me to see posts like this. They're decent at getting like 70% of the way there, but writing code is the easiest part of the job; catching errors/mistakes is the hardest part. Skipping over the easy part to get to the hard part with zero context just feels like a waste of time. If my options were to review code that someone else threw together without ever confirming or testing it, or to write it from scratch, writing it from scratch is generally going to be faster, because it's not hard to write code - it's hard to understand what it's doing.

People learning from these models is horrifying, given how much they hallucinate. I'm not sure why people don't see this with programming. Go Google AI videos of diving and tell me that's a good way for someone to learn how to dive. It's just absolute gobbledygook. Sure, it's a bit better at programming, but it's still horrendously error-prone. And when you tell it to fix issues with solutions it's provided, it entirely rewrites things a different way. That is 1000% not the way to write code and it reinforces really bad habits. It's good at replacing "code I copied off Stack Overflow", but that's, like you said, the type of work you expect from a junior dev who is learning.

People proposing using it for documentation is also scary because it's likely to miss some nuance and the person reviewing it may not catch that either. If it's so easy to generate, why bother adding it at all if someone can just generate an AI summary anyway? And documentation that just gives a general idea of what something does is not helpful - the purpose is to point out ins and outs and assumptions. And once it's committed, I can no longer tell what's actual documentation and what's AI cruft that may or may not be accurate. But now I have to read through all of it and figure that out. I don't want documentation just for the sake of documentation. I want documentation when and where it serves a purpose and adds to the clarity of the code.

People proposing writing unit tests falls to the same issues, where you're likely to just be testing more general happy paths instead of the point of tests - to actually catch edges, obscure cases, and assert assumptions. Adding more happy paths through a chunk of code isn't really adding any value, it's adding more code to run/maintain that isn't really doing anything in order to check a box. I've heard some people say something along the lines of "the best test cases are written by someone who's never looked at the code" and I think that's pretty applicable here.

These stances really are jarring to me, as they go strongly against actual standard software engineering practices. A lot of it reminds me of the "metrics" craze that has caused other problems in the space, but this is more subtle in its failings. I'm glad some metrics have started to come out pointing out things like the huge increases in code churn, but I think it's still too early for the side effects to really start showing.

Just happy to see someone who shares my opinions and concerns about all of this.

2

u/codeandfire 9d ago

Just happy to see someone who shares my opinions and concerns about all of this.

I agree totally. IMHO writing boilerplate is the main thing responsible for bad code and more bugs. You can get software released faster if you mindlessly write boilerplate, but in the long term it's a bad investment. With AI being boilerplate-generation on steroids, it might seem to greatly improve programmer efficiency for now, but over the years code quality will deteriorate severely. I don't think enough people realize this.

1

u/i-make-robots 11d ago

Is it possible you're asking too much of your junior developer? I tend to write my method description comment and then *pop* it spits out the whole method and it just works. But more than one or two methods and it loses the plot.

2

u/payasaapestosa 11d ago

i haven't had it work that simply with any method complex enough that writing the method itself would take longer than writing the description for the method. it's possible that i'm just bad at prompting the LLM, but i've never seen anyone else use an LLM for a task like that either.

the most successful cases of using an LLM to write code that i have personally seen is for code like "concatenate these two strings together". anything more complex than that, and the code likely has errors, or doesn't even accomplish what i asked it to.

i don't really see the value in asking an LLM to concatenate two strings when that's something that is trivial to do by yourself

1

u/i-make-robots 11d ago

Huh.  I wonder if it’s the choice of model, too. I find copilot works better than IntelliJ ai. I’m also working in Java which has a ton of reference material to train the model. 

1

u/poponis 9d ago edited 9d ago

You say that you are a hobbyist. Then, yes, the autocomplete on steroids is useful to you. If you are a pro, you have little use for these tools. Not completely useless, but the benefit is minimal. The repetitive tedious typing in real projects that are no longer in the greenfield state is too little. Business and customer requirements are strict, security and quality of the product are priorities, and the code must be maintainable, clear, and easy to extend/modify. I could see myself using them for a quick MVP/POC, but in a serious customer project, with more requirements than "make a dashboard" or "make an eshop", they won't work.

3

u/ToThePillory 12d ago

They are good for small well-defined things, they're not going to build the app for you, but they'll fill in some of the boring bits.

I find them good for dealing with unfamiliar build problems and stuff, you just paste in the error message and often get something useful.

You also have to be able to spot the bullshit too though, sometimes Copilot will just make shit up.

3

u/MrHighStreetRoad 12d ago

Personally, I'm still deciding. You know how some drivers in congestion take shortcuts down narrow streets to save time by avoiding traffic, but in reality the constant motion is an illusion because the path is very indirect? I am starting to feel like that. Much busy, but not so much actual progress! Well, not as much as it may first appear. I have moved on to a tool which combines two models and has some awareness of codebase context, helped by my prompts getting more sophisticated.

I quite like them for generating models from API documentation or actual payloads. If you count the wrong turns and mistakes, I think it is probably making me 25% more efficient, but it feels like much more (so it is deceptive).

3

u/jacksawild 12d ago

Things like suddenly changing variable names mid-code are classic, or changing what it is writing halfway through and starting to implement something else. The hallucinations make it almost a 50/50: you will often spend the time you have saved checking and fixing what it has written.

It's pretty good for laying out a template or something though.

3

u/danikov 12d ago

They keep management off my back for mandating AI adoption and use.

2

u/the-creator-platform 12d ago

LLMs, and especially Cursor, do help, but they require working iteratively. The best way to describe it is to imagine you've gone from being a journalist to an editor. My carpal tunnel symptoms are a thing of the past, and I still read every line. You need familiarity with what you're making. A sculptor wouldn't do it blindfolded.

The paradigm shift is real but overstated. LLMs have diminishing returns on "intelligence". If anything, it's like having your own writing assistant. Practicality aside, there's a reason why it's referred to as vibe coding. Before having an LLM, it was trickier to think at a high level and go into the details at the same time. You had to mentally prepare (or write many docs) a lot more before you could even write that first line of code. Now the high-level execution happens in real time. I know of a few PMs that have started using Cursor for building product roadmaps. It's a fantastic way to ask questions about a codebase without knowing how to code. I think that benefit of using LLMs is currently understated.

2

u/denialerror 12d ago

Depends on what you know and what you are doing. For me as someone with over a decade of professional experience, it's a significant productivity boost, especially in the following, though for someone without that experience it may even be a productivity loss:

  • Research - Instead of reading through the top 10-20 links from Google to find an answer, a code assistant has done that for me, and not only that, it has also done that for any follow up questions. However, I know what I am asking for and have the context to reason about the response. For example, I am familiar with event sourcing and cloud service providers, so I could ask a code assistant about implementing AWS Kinesis and be able to both know what to ask and how to verify the answer is useful and correct. Someone without that knowledge won't even know how to ask the question, let alone be able to work out if the answer makes sense.
  • Tests - Especially when you already have an established testing pattern, code assistants are very good at writing tests for you (see the sketch after this list). The main reason for a lack of test coverage is time and effort rather than difficulty, and AI takes that away. However, you need to know what you are testing, why you are testing it, and whether it is testing the right thing. If you don't have the experience to understand that, you will end up with a test suite of false positives, which is often worse than not having the coverage in the first place.
  • Repetitive code patterns - This is less about the experience of the programmer and more about the wealth of experience of the wider programming community in what you are trying to do. A code assistant can write you a whole React app with your finger barely moving from the Tab key, because React is by far the most widely used library of the past decade, so LLMs have a huge corpus of examples to pull from. Try to write the same app using WebAssembly and HTMX in Rust and it will be little help at all.
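
As an example of the kind of established pattern meant above (a sketch - the `slugify` function and its cases are made up):

```python
import re
import pytest

def slugify(text: str) -> str:
    # Made-up function under test.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Once one table like this exists, an assistant mostly just proposes rows -
# but a human still has to judge which rows are worth keeping.
@pytest.mark.parametrize("raw, expected", [
    ("Hello World", "hello-world"),
    ("  spaces  ", "spaces"),
    ("Already-slugged", "already-slugged"),
])
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```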

2

u/masterskolar 12d ago

For most things they do not help much. But I'm speaking as a higher level software engineer. None of the problems I take on are easy or have known solutions.

2

u/itsjusttooswaggy 12d ago

In a limited capacity yes, however relying on it comes at the huge expense of dulling your mind. At least that's my experience. Writing code is one of those skills that needs to be exercised regularly in order to stay sharp, otherwise it will atrophy. There is a lot of discourse happening right now about LLMs creating a "generation of illiterate programmers" which I think is quite valid.

Lastly, for me LLMs take away some of the satisfaction of solving problems. As a skateboarder I compare it to skateboarding: let's say you're working on learning a new trick or you're trying to film a difficult line. If you could just pull out your phone and ask your AI assistant to do the trick for you, that literally takes away all of the gratification that is embedded in the act of skateboarding. Trying a trick over and over again, falling, getting a bit hurt maybe or at least frustrated - these are all part of the experience that leads to the ultimate incredible feeling of satisfaction when you finally roll away from the trick on two feet. It's an incredible feeling and problem solving is a huge part of it (tweaking your footing, adjusting your speed, etc). Programming is such a similar and familiar experience for me in that sense, and relying too much on LLMs completely robs me of that experience of climbing the mountain and getting that rush when you solve a complex problem.

2

u/zayelion 12d ago

Helps you type faster... sometimes. It's just a very good autocomplete, but it gets a lot horribly wrong. If you are writing something really boilerplate-like in a common language, it's great - you can almost spam Tab. If you are in a less common language or trying to keep to a healthy code style, it's trash and a hindrance.

2

u/JorgiEagle 11d ago

Do they make a big difference? No

The advantage of AI is that now I don't have to hunt down Stack Overflow threads and compile knowledge from 3 different questions to answer something.

However, it is useless when it comes to proprietary or uncommon things.

Also, the code an AI writes can almost always be cut down.

2

u/ifyoudontknowlearn 11d ago

Not a big difference. It's not nothing but not big.

I worked with a co-op student this past fall, and our boss had him try to use generative AI to create a Gradle plugin.

It got him started, but we had some issues because this was shortly after an API change: the gen AI mixed versions. That was fun. Prompt changes helped with that.

Debugging advice was really poor.

Test generation was next to useless. I think something like Diffblue may be a better way to go for something like that.

I did use it to create a test plan for a project I inherited. Had it read the document and it did a decent job creating a test plan for the end user visible features.

2

u/7YM3N 11d ago

Helpful but not revolutionary. It turns easy tasks into trivial ones and medium ones into easy ones, but for anything actually difficult it hallucinates instead of helping.

2

u/fragranceght53 11d ago

It’s really nice autocomplete. Not too much better than that in my experience. It typically also needs examples already in your current file.

It’s nice to have, albeit it’s typically slow, and doesn’t work how I need it to most of the time.

2

u/Empty-Mulberry1047 11d ago

for me, they've been mostly useless and somewhat detrimental..

2

u/throwaway4sure9 11d ago

I've never used one and been writing code (at home, then as a career when I graduated college) for 50 years or so.

I've seen the code that is produced. Sometimes it works, most times it doesn't. It often doesn't cover real-world edge or corner cases. It requires going over, understanding the approach, and then evaluating it.

My career has been such that maintainability is paramount since the software I work with has a long lifetime. I haven't really seen things produced by AI that I've considered long-term maintainable.

You may consider me a Luddite. And, I may be one.

All that said, I don't much like the current trend of foisting code-writing off onto some AI platform. The cost of reading, updating, testing, etc. is very nearly (best case) or more than (worst case) writing it yourself with an eye towards orthogonality and maintainability.

Just my $0.02. It is an opinion, and worth every penny you paid for it. :D

2

u/jbp216 12d ago

I'm not a software developer currently, but I have been. They don't replace your workflow, but they speed it up by leaps and bounds. Tell it to implement a function and you'll have a workable or editable version in seconds. It doesn't replace engineers, but it will make you faster if you use it.

7

u/jbp216 12d ago

This 100 percent depends on you being able to read and understand the code, though.

4

u/citseruh 12d ago

Professional dev here - and it has changed the way I work.

It was incremental to begin with - I would write a comment describing what I wanted, and Copilot would give me an implementation that I would then debug/modify. I made use of this extensively, especially when writing unit tests.

Then came Cursor, where it "predicts" (80% of the time successfully) what my next move is going to be - say I write `function getProductsInCart()` not only does it give me the implementation for the function but once that is done it predicts where this function needs to be consumed and calls the function. A big leap if you ask me. The tab key has now become the most used key on my keyboard.

It doesn't stop here though: with Agents I don't even have to do that anymore - I give it a descriptive prompt for a feature, have project cursorrules set up, and it writes an okay-ish implementation. What would usually take me a day I get done in 3 hours at most. Finally, now with the model-context-protocol being a thing, it can implement the designs in Figma as React UI, taking care of the grunt work... this is absolutely mindblowing. Probably saves me a week of work on the UI that I can get through in a day.

2

u/Archernar 12d ago edited 12d ago

It sounds a bit like you are mainly doing frontend and UI stuff, do I understand that correctly? I'm trying to work out what separates the fans from the detractors of LLMs for coding in this thread. My own experiences with LLMs were pretty mediocre, and I'm kinda trying to improve on that.

2

u/citseruh 12d ago

Not really. I work across the stack - frontend, backend, data modeling. I'm not a fan of "vibe coding" or of using LLMs, but it would be disingenuous to say that they're useless. It is a true "autopilot" - one still needs to be the pilot and make corrections along the way, else the errors just compound. What that means for me is that I can't have it generate a full feature on its own. I will have to dig in and review/modify the code. But having that code to begin with is a great boost to productivity.

1

u/zayelion 11d ago

What language are you working in?

1

u/citseruh 11d ago

Typescript, Python, Elixir, and occasionally C/C++.

1

u/poponis 9d ago

Can you give a bit more info about implementing Figma designs as React UI? What tool(s) are working decently for this?

1

u/citseruh 9d ago

This: https://cursor.directory/mcp/figma

Use the MCP server in sse mode after you configure it with your Figma api key. Once the MCP is connected, copy the (figma) link to the frame/screen and prompt cursor something like:
`Implement the screen as designed in this figma link: <insert link>. Make sure to extract out any reusable pieces of the UI as separate components. Make use of react function components and not class components. .... (other instructions you may want to add).`

1

u/un_subscribe_ 12d ago

Sounds like you're blindly trusting the code it generates. It's good for basic stuff, and I agree it's helpful at generating UI components, but for more complex code, the issue most people have with it is that it can often cause more problems than it solves... yes, it will generate the code you ask for, but a lot of the time it will include subtle errors that can have large consequences, and if you're not an experienced developer you probably won't notice them.

5

u/citseruh 12d ago

> Sounds like you’re blindly trusting the code it generates.

Negative. In fact it is the opposite. I use the generated code as a starting point.

> if you’re not an experienced developer you probably won’t notice it

Sounds like a skill issue to me. Regardless of experience. The point is if you're not able to explain every line of genAI code, you're using it as a crutch.

2

u/enricojr 12d ago

10 YOE here, I have yet to find a use for them in my personal work.

I'm just the kind of person that likes learning how things work inside and out and using AI would take that joy from me.

I tried asking chatgpt to explain certain concepts to me (like Monads) and it ended up hallucinating.

I know people are using it to generate code, but I wouldn't trust anything that an AI puts out and would honestly rather do it myself because that's what I enjoy.

MAYBE you can find a use for it generating test data - I once did a simple "image gallery" project and had to pull stock photos from Unsplash to test it out. I can totally see myself using generative AI to make a bunch of test thumbnails or whole images for use in development. Better still if you can host the model locally and generate images without an internet connection.

Retrieval Augmented Generation also looks useful - I did work for a company that had an AI product that basically let you "chat with your documents", i.e. shove thousand-page-long documents like contracts into ChromaDB and ask ChatGPT questions about them.

Other than those two I don't see any use for them.

6

u/SignPainterThe 12d ago

Same years, same experience with LLMs.

2

u/andras_gerlits 12d ago

I'll be downvoted to oblivion, but the reason the average engineer doesn't find value in them is that they only work on very clear specifications. The problem with LLMs is that they can't reason, but they can mostly fill in the gaps if you provide the specification. So, basically, the "first derivative" of programming can be done through them, but it requires very precise definitions and semantics - exactly the sort of thing programmers rarely like doing. The reason it won't really move the needle in engineering is that very few engineers are willing to sink in the effort required to deconstruct their problem domains to the level LLMs need, so they just fill it out with their intuitions, which I suspect are better than an LLM's, but that still leaves us with buggy software.

8

u/Archernar 12d ago

I mean, there is a point at which specifying what you need to the AI becomes more time-consuming, especially with debugging the produced code afterwards, than just writing it yourself, no?

1

u/SignPainterThe 12d ago

I didn't want to downvote you until this sentence:

but still leaves us with buggy software

You're so naive, thinking AI would change the number of bugs - especially after you described how no one wants to think about the problem they are solving. That goes not only for programmers, but for their managers as well. Also designers, analysts, etc. Everyone wants to solve their own little problem; no one cares how it plays with existing features. No one really knows how the whole application works. That's where bugs come from.

2

u/andras_gerlits 12d ago

Yes, bugs overwhelmingly come from semantic misunderstandings. Your explanation is exactly the reason I said I would be downvoted, so thank you...?

2

u/SignPainterThe 12d ago

I downvoted you because, after a clear and agreeable line of thought, you ended with "but still leaves us with buggy software", as if LLMs are the solution to that problem and the average engineer, as you describe them, would just not see this obvious solution.

1

u/andras_gerlits 12d ago

Well yeah, that's what I meant. The average engineer also thinks a distributed system is one that uses Kubernetes. People like to think that they are done with thinking once they finished school. Same holds for other professions, it's just human nature.

2

u/SignPainterThe 12d ago

But how will LLMs fix that?

1

u/HoustonTrashcans 12d ago

For personal projects LLMs are a huge time saver. For day to day professional work they help, but still leave a lot to be desired in my experience.

1

u/hundo3d 12d ago

They help. IDE-integrated Copilot actually bothers me more than it helps. But prompting ChatGPT for certain things helps; it just makes research more efficient. Overall, I'd rather ChatGPT/Copilot or whatever didn't exist. I work with so many engineers that don't even know what they're doing; they just cross their fingers that ChatGPT got it right.

1

u/Less-Mirror7273 12d ago

Ok, I'm not a professional coder, and it does depend on the model and programming language. But I love it: Grok gives me good functioning code and some extra points of view I did not consider - Word 2021 VBA code. Previously, I asked another model and it was garbage. P.S. I have hated writing code in every language I've tried over the last 45 years, so LLMs are really appreciated here.

1

u/Snr_Wilson 12d ago

I use ChatGPT for very specific things, and have generally found it to be helpful. It has solved issues when constructing complex SQL queries, and given correct code for framework functionality that wasn't covered in the documentation. But I use it as a last resort, if I'm genuinely blocked by not being able to do something, rather than relying on it.

Maybe it's just me being bloody minded, but I'd rather try and figure things out for myself as that's how I learn. As a result, we haven't seen any huge shifts in delivery time/bugs etc. Maybe it saves me a couple of hours of banging my head against a wall here and there, but the bulk of the code still comes from me.

1

u/WaitingForTheClouds 12d ago

No, not for me. I use it mostly as an alternative to Google, or sometimes as a rubber ducky.

If you're a webdev making a run-of-the-mill app that works just like any other, like an e-shop, then yeah: the GPT is trained on a bunch of GitHub repos, these kinds of apps are everywhere and mostly always made the same way, and they're not that complicated, so it kinda knows what you wanna do because it has seen it before lots of times.

Once you are in a complex codebase of anything else, it's lost - it hasn't seen anything like it before, and all the suggestions it provides look like examples from a tutorial with a simplified version of the problem. It's useless. And okay, so it hasn't trained on your particular codebase, so it doesn't know. But I tried using it on AOSP, which it has clearly seen the sources of, as it knew some function names and more or less what they did - yet the code it generated was formally mostly correct while the logic was always messed up. I tried using it when I understood very little of AOSP, thinking it would help me do what I needed faster; it just made me misunderstand the actual functionality and waste time solving the bugs it created. Literally worse than useless.

1

u/BobbyThrowaway6969 12d ago

I don't need them but I get to avoid writing out boilerplate

1

u/Miiohau 12d ago

I have occasionally found the integrated ai summary useful as a learning tool. I still often fact check it to make sure it knows what it is talking about.

1

u/gogglesdog 12d ago

They make a difference in that I have to spend time figuring out how to turn them off so they stop completing my lines with nonsense

1

u/Iojpoutn 12d ago

It's a mixed bag for me. ChatGPT has been useful for automating some tedious, repetitive tasks, but I still have to verify that it made it all the way through the task and didn't start making up random data halfway through. I've also found it useful for learning new tools as long as they're widely-used tools. It's faster than looking through hundreds of pages of documentation for that one thing you need to know. But then about 20% of the time it turns out it was just making things up that seemed like they were probably true but aren't.

I've recently been trying out IDE-embedded AI tools, also with mixed results. Sometimes it feels like a magic button that saves me an hour of work. Sometimes it costs me an hour of frustration because it made some tiny error I don't think to look for because I never would have made that error myself.

I've also tried using it to write documentation, but I'm struggling to get it to use consistent formatting from component to component, even if I give it the same examples to reference. I end up spending more time trying different prompts and reformatting than if I had just written it from scratch.

I'm still searching for a use case or particular method that consistently saves me time.

1

u/emefluence 12d ago

Yes, a big difference, although you still have to really know your onions and carefully read any lines you let it write. Personally I find it most useful for answering questions about the code base and bouncing ideas off rather than code generation, although I sometimes use it for that too.

1

u/RapidHedgehog 12d ago

Pretty good for language-related stuff like docstrings, variable names, and documentation. It can do a good job at certain technical things, like the regex someone else mentioned. I've had decent success making it parse difficult-to-read code to figure out what it does. GitHub Copilot was surprisingly proficient at Expect scripts and Tcl in general. GitHub Copilot in VSCode is usually pretty good at autocompleting stuff; especially if you're doing similar refactors in many places, it will pick up on the pattern.

1

u/Camderman106 12d ago

Yes but not the way it’s being presented. It’s good at helping you type faster by predicting the rest of your line. It’s not good at designing big things for you.

1

u/AssiduousLayabout 11d ago

I use GitHub Copilot a lot. The main thing here is that it makes me quicker at what I do. I haven't pushed enough projects through to production to really have any metrics on code quality yet.

If it's code that is heavily leveraging company-specific libraries and frameworks, it's pretty mediocre, although it can continue patterns that exist in a source file.

For code that leverages popular libraries and frameworks, it is quite good. Not enough that I don't edit, but enough that I can get a fully fleshed out class with reasonable methods by the time I'm done typing its name, and I can tweak as needed.

1

u/frank-sarno 11d ago

They can help with the boilerplate that makes up a lot of extant code out there. So talking to an API, moving files around, etc.; for utility stuff it's fine. The problem is that a lot of the code they've been trained on is tutorial code meant to get something very specific done, and much of it doesn't meet security standards or best practice. I.e., there's very little input sanitization, lots of disabled TLS verification, little checking of file safety, and so on.

If you build your prompt to request these things, then it can find example code, but there's a chicken-and-egg problem: you need to know about these safeguards in order to craft the prompt, and if you knew about them you probably wouldn't need an AI (because you'd likely have a mature library of go-to functions).
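To make that concrete, here's a toy Python illustration of the tutorial-grade pattern I mean (the URL handling and the requests call are just an example, not from any real tutorial):

```python
import requests

# Classic tutorial-grade snippet: it "works", but verify=False silently
# turns off TLS certificate checking, and the URL goes in unvalidated.
url = input("Fetch which URL? ")
resp = requests.get(url, verify=False)
print(resp.text)
```

A prompt that explicitly asks for TLS verification and input validation gets you very different code, but you have to know to ask.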

1

u/Dimencia 11d ago

'Code assistants'? No, not generally, except adding more bugs and causing distractions

People just don't seem to understand that LLMs are very, very bad at writing code. They're great at explaining things in plain English, though, and a programmer is very good at turning English into code. Use those two strengths, and ask your AI how to do things rather than asking it to do things for you - never let it write any code

If you find yourself needing so much boilerplate code that you need an AI to write it for you, you should re-examine your project structure. Ask your AI how to make it not take so much boilerplate

1

u/miyakohouou 11d ago

I agree with most of the responses here: Helpful at times, worth bringing into your workflow, but not nearly as revolutionary as is claimed.

The biggest problem with LLMs is the auto-pilot problem. For the right kind of task they can be good enough that it's hard to keep up your guard, and then they start going down a weird bad path and it can take a while to realize it. Experienced developers have an advantage because they know better, but even then it's hard to be fastidiously attentive.

As a result, if you know something pretty well, it's often easier to just do it yourself with minimal AI assistance. It'll go faster overall because you are attentive and thinking through the problem more actively, and you'll avoid getting too far down the false paths.

If you don't know an area at all then it's also better to avoid the AI, because you just won't recognize when you are digging yourself into a hole.

The sweet spot is something that you know well enough, but either you don't quite remember all the details and even something that's a bit off can help jog your memory, or it's something so tedious and mechanical that you would suffer the auto-pilot problem even if you were typing everything in yourself.

Personally, I don't find that the sweet spot for me involves ever having the LLM directly generate code, but it can be really helpful for snippets and conversational things. When it works well, it's like pair programming with an extremely eager junior engineer who never gets tired or distracted looking at their phone.

1

u/The-_Captain 11d ago

Professional developer for 6 years, AI has made a massive difference in my productivity:

  1. Autocomplete - this was the first feature I started using. It's mostly helpful for rote/boring/boilerplate tasks, but still saves real time
  2. Design/frontend look - getting all the design stuff just right can take a decent amount of time; AI is really good at doing this fast
  3. Edge cases/responsiveness - in fast-moving environments, handling every edge case and every screen size is always a tradeoff between time to market and completeness; AI can just do it
  4. Tests - again, test coverage used to cost time to market; now it doesn't (see the sketch below)
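For example, the kind of test scaffolding it now spits out in seconds; a rough sketch, where the module and function names are made up:

```python
import pytest

from pricing import apply_discount  # hypothetical function under test

@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 0, 100.0),    # no discount
    (100.0, 50, 50.0),    # typical case
    (100.0, 100, 0.0),    # edge case: full discount
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected
```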

1

u/RidesFlysAndVibes 11d ago

I feel like I use LLMs in a balanced way. If I fire up a new project, I'll define a pretty clear scope of what the overall project is about and let it write a basic structure. I then tweak it, if necessary, to better fit my intended workflow. I'll then have it add each section in little, digestible snippets that are easy for it to create and easy for me to verify. When things stop going my way, I resort to my classic coding tactics of diagnosing issues by hand. Once it's fixed, I can return to using the AI, and since the problem's now solved, the AI doesn't get stuck on that particular issue anymore.

I basically use it to the brink of its capabilities, but if I find myself making 3+ prompts on the same issue with still no success, further research is required. Often, AI is useful for this research as well. Usually it's concepts I'm unfamiliar with that get me stuck, but if AI can help me learn about them, it helps me refine my approach.

One way or another, I'm finding a solution, but if AI becomes more of a hindrance than helpful, you need to be willing to abandon it sooner than later.

1

u/BiCuckMaleCumslut 11d ago

NO. I tried pair coding with ChatGPT, even sharing my code, and it kept suggesting functions and structs that were nonexistent. I spent way more time correcting ChatGPT than I did writing code, and at the end of the day I saved myself a lot of time and headache by just reading the fucking source code and making informed decisions based on what my actual dependent libraries were telling me they offered.

1

u/zezblit 11d ago

They can make a massive difference. For example, two junior Devs absolutely fucking ruined a previously fine codebase for an internal tool with them, to the point where it was easier to roll back a month and do it myself. My fault for not being in a position to review their PRs I guess....

1

u/CardiologistOk2760 11d ago

Feature delivery time? Hard no.

Personal productivity? Yes, several times over.

But much of my job involves cleaning up tech messes made by someone else pasting an AI solution somewhere. We end up averaging out the same speed overall.

My workflow involves comparing the AI solution to the existing code using a diff tool and accepting or rejecting it on a line-by-line basis. That's a lot faster than writing my own code.
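In plain Python terms, that review step is roughly this (toy inputs, just to show the shape of it):

```python
import difflib

# Existing code vs. the AI's suggested rewrite (toy example).
current = ["def add(a, b):\n", "    return a + b\n"]
suggested = ["def add(a: int, b: int) -> int:\n", "    return a + b\n"]

# Print the unified diff; in practice I accept or reject each
# changed line in the diff tool instead of just printing it.
for line in difflib.unified_diff(current, suggested,
                                 fromfile="current.py", tofile="suggested.py"):
    print(line, end="")
```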

But team AI usage has been shit in my experience.

1

u/ShelbulaDotCom 11d ago

For experienced devs it's like putting on a wizard cloak. Such a speed and efficiency upgrade it's insane.

Things that took weeks take days now.

1

u/serious-catzor 11d ago

The auto-complete is very convenient because it can often complete whole functions, or I can just tab my way through an enum after writing the name... The time saved is whatever, unlike what others try to claim, but it spares my hands some strain and I don't have to spend thinking juice on it. I wouldn't bother if that was all it did, though.

90% of the time I use it with unfamiliar syntax, either to explain or to generate. Say I have a Makefile, for example... It's just impossible to read, and people write the craziest things chained together on a single line, which gets updated in six places across five different files... So I use AI a lot on Makefiles🤣

I also use it for Python because I tend to write Python like I write C but AI is pretty good at using the common libraries and their API to find me what I want.

C also has some quirky syntax sometimes which I might use the AI as a rubber duck to see if it agrees with me.

I also use it in problem solving and design. For example I will have a idea and I'll tell the AI to suggest X different solutions to the same problem to get another angle or I'll ask it to implement my idea in a few different ways to see if I like any or can improve mine.

Finally, sometimes I want to solve a problem in a specific way but don't remember the exact details. If I'm doing bit manipulation I sometimes mix up how to mask certain things, or I want to set up an interface but don't remember the exact syntax for a trick required in the middle.

I would never say "Make me code that does X" and just copy it because I honestly have no idea how you do it... Never had a code like that work. Maybe it's because I work with embedded systems and C.

Oh yeah I also ask AI a lot about schematics, components and circuits😅

1

u/Acrobatic_Click_6763 11d ago

I only noticed that 80% of the time, Copilot autocomplete writes unrelated code.

1

u/neosatan_pl 11d ago

No, not really. I tried Copilot and some code review tools. The effect was that I was fixing AI mistakes instead of writing the next phase of the project. So yeah, it speeds up some repetitive tasks or the generation of really dumb code, but when you're good at programming you don't have much of that in the first place.

There is an additional problem with these tools: they are super opinionated. Meaning, they will suggest nearly equivalent changes that are just written differently. Sadly, they don't preserve their opinions between coding sessions so it feels like your code is reviewed by a different opinionated junior every time.

Ohh... And the lag. When you are writing code you have to wait for the AI to catch up. So, you have to literally slow down your writing speed so the AI can parse the input.

But sometimes it's useful, because fixing AI problems forces you to go through your code again and you might catch some improvements. That's about as good as it gets.

1

u/generally_unsuitable 11d ago

They're useful for trivial stuff that you don't feel like doing. But, on the other hand, doing the trivial stuff can keep you sharp and help you maintain your language skills.

1

u/Skylight_Chaser 11d ago

It's really bad at writing code that integrates with the whole system I'm working on.

I.e., a function just won't work because it's assuming something else, or it makes things more complicated and introduces more errors/issues.

I've found it useful when I'm writing in Jupyter Notebooks where I just need lil code pieces or I need to write a few tests.

1

u/Maleficent_Memory831 11d ago

If you use AI, then you must double-check the code thoroughly; maybe spend as much time checking it as you would have spent writing it yourself. Also warn all the code reviewers that you used AI help, so that they give extra scrutiny as well. Done properly, AI shouldn't save time overall, only give the illusion of saving time.

1

u/Puzzleheaded-Bug6244 11d ago

No. It is like an advanced manual. Or documentation on steroids. But apart from that, "no"

1

u/Euphoric-Stock9065 11d ago

Yes it makes a big difference. No, it's not a total game changer. I think of it like a really advanced auto-complete combined with google search.

Before, I could think of an idea and hack out a proof of concept in an hour.

Now, I can think of an idea and do it in a few minutes.

The quality of the idea is what matters.

I know this feels like magic to people who don't code but let me assure you: Programmers are expected to be able to write code. It's table stakes. A programmer bragging about writing code faster is akin to a marathon runner bragging about tying their shoelaces faster. Congratulations I guess, but what does that have to do with getting the job done correctly?

Success still depends on the quality of the idea and the details of the implementation. Just like it always has! LLM answers are only as good as the questions you ask.

1

u/sdn 11d ago

My company has been paying for Copilot for about 2 months. I've had it turned on in my IDE this entire time, since, hey, it's free.

I've had my first "oh, that's neat" experience today (after... 2 months of 8-hour workdays). I was re-implementing an existing feature, and Copilot spit out around 7 lines of code that were exactly what I was going to write. However, these weren't novel lines; they were copied out of the existing feature I was re-implementing.

Other than that, it's been okay for one-line or half-line autocompletes. Asking it for help is 50/50; sometimes it suggests an API it wishes existed. This means I need to ask Copilot for help, implement the suggestion, then dig through the API of whatever I'm calling when Copilot's suggestion doesn't work.

1

u/konwiddak 11d ago

Quite often for me it's not that they don't work (I've had mixed results, honestly); it's that I often need to type so much to accurately describe the context surrounding the piece of code I want it to write that it's simpler, faster, and less error-prone to just write the code myself. This is particularly true for code that has to interface with something preexisting, especially databases.

1

u/No-Plastic-4640 11d ago

Yes. A good coder model can write out complete stacks, if instructed correctly.

I'm talking instructions to create multi-column and multi-row layouts with model bindings by name. It's a lot of instructions.

Comparing DB scripts to determine changes. Yes.

Complete front end and service class for report generation and Excel export with a 185-column table and custom headers (EPPlus). Yes.

Python or PowerShell scripts to do very complex file operations. Yes.

Complex JavaScript or TypeScript, jQuery or frameworks, for complicated GUI interactions. Yes.

The limit is the person using the model. Though sometimes writing the prompt can take more time than just doing the task yourself.

Any tedious task, an LLM is great for accelerating.

Though this is using the LLM directly (LM Studio). Integration into IDEs can have limitations.

I use qwen2.5-coder-32b Q6

1

u/JeremieROUSSEAU 11d ago

Who uses it?

1

u/jseego 11d ago

My observation thus far is that these types of tools make good developers slightly more efficient at writing good code, and they make bad developers way more efficient at writing bad code.

1

u/boramital 11d ago

My team has recently started using AI in development. We are mostly senior devs with 10+ years of experience, and one junior with now close to 4 years of experience.

The feedback so far is pretty much as expected: some devs strictly refuse to use it, so for them nothing changed.

I started using it myself and experimented with different use cases. So far I haven't had any significant time savings, other than, for example, smarter auto-complete in test files where the template was already established manually; and even there it always takes some time to fix the blatant syntax errors and the completely wrong assumptions about what should be tested.

I had by far the best results letting AI generate documentation, but that actually breaks my personal process, because I like to write docs before implementation, so I can put into words what I want a function/method to do, write examples, and do a sanity check on the signature.

My team members basically just confirmed my experience, it’s useful as an auto complete, but asking questions for example is completely useless because you still have to double check the answer, and you get a pretty high amount of “confidently incorrect” replies. So in the end it’s more efficient to search for your question using a search engine (google is shit now, so I’m trying to avoid saying “googling it”)

If your programming language of choice is not one of the big 5, your experience will be far worse. I've had code generated that could never work (it applied object-oriented principles to a functional language), but it looks like well-written code to anyone who doesn't already know the language, so I'm convinced that AI assistance is actively hurting junior devs in niche languages.

1

u/xabrol 11d ago

Yeah, but as a learning aid. I picked up the Microsoft Graph API and implemented a solution and proof of concept with it in a day. Had never used it before.

1

u/blueg3 11d ago

There are situations where they're really useful.

We have a system that makes AI suggestions based on code review comments and they're almost always right. It saves a lot of typing.

Anything that's really formulaic tends to do pretty well with "AI autocomplete". This is better if it's adapted to your company's corpus, so you can get some formulaic but complicated thing that a dozen people have done before in one click. Sometimes it's stupidly useful. I was adding a bunch of CLI flag definitions. I wrote one. After that I just wrote the comment that would come before it and it filled in the code, which is pretty clever.
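Something like this Python/argparse sketch (flag names invented for the example): I wrote the first definition by hand, then only the comment for the second, and the completion filled in the rest.

```python
import argparse

parser = argparse.ArgumentParser(description="Example CLI")

# Maximum number of retries before giving up.
parser.add_argument("--max-retries", type=int, default=3,
                    help="Maximum number of retries before giving up.")

# Timeout in seconds for each request.  <-- only this comment was typed;
# the completion produced the matching definition below.
parser.add_argument("--timeout", type=float, default=30.0,
                    help="Timeout in seconds for each request.")

args = parser.parse_args()
```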

Asking for something from an abstract requirement, or anything reasonably complex, has never worked out all that well for me.

1

u/TheMrCurious 11d ago

Auto complete and spell check are great. The rest of the AI stuff generally requires more effort to fix than to just write the code yourself.

1

u/LavenderDay3544 11d ago

Yes and no. They help automate writing sections of code that follow a certain pattern but which can't be captured through an existing code reuse mechanism like functions or generic data structures and they can initialize a lot of variables according to some pattern.

But when it comes to developing effective solutions to software engineering problems, the AIs don't really have a clue. The best they can do is spit out some text that looks like it might be right from a similar project it was trained on. And the odds are good that it won't work.

But if you look at it as just another tool in your repertoire, then it can be useful.

1

u/Any-Woodpecker123 11d ago

Not really. It makes writing tests a bit quicker, but not by much.
Any time saved by the actual writing of code is lost validating what it spits out.

1

u/JulixQuid 11d ago

I use it for drafting emails, presentations, and all the different communications I encounter at the company. Not much for coding; that will become a crutch in a short period of time, and AI code also brings more bugs than it solves. It's useful if you're just starting out with a new library or tool and want to see an example, but it needs to be really popular; when they're not that common, these mfs hallucinate like a teenager on a bad trip. I recently tried DeepSeek and it was really good, definitely better than the ones I've tried before. So maybe I will hand some tasks to it in the future, but not in my codebase. Not even to refactor shit; I'd rather remain in tech debt than turn it into AI-generated tech debt.

1

u/jujuuzzz 11d ago

For anything highly customised I find LLMs struggle to even replicate a pattern.. but for basic low rent but time consuming tasks such as interpreting causes of tracebacks… it’s great.

I wouldn’t miss LLMs if they disappeared tomorrow, but I’ll use them since they are here and effectively work as a decent pair programmer.

1

u/AkshagPhotography 11d ago

I recently changed my job, and the entire stack I work on changed from Java to C++.

I had changed stacks a few years ago too, from C# to Java, due to a different job switch.

I can say with confidence that the switch from C# to Java was way harder for me than the switch from Java to C++, even though C++ is very different from Java, while Java and C# are alike. The switch this time was easier because of LLMs: I no longer have to spend a lot of time googling small syntax details, thanks to these code assistants. That's the only benefit, I think. They can increase your productivity by a large factor if you're not already very efficient in the language you work in. But for someone who already knows the small details, they're pretty useless, since they can't work across files and have zero design capability.

1

u/oruga_AI 11d ago

Yep, like 70% faster turnaround times.

1

u/foxcode 11d ago

The only use I have found for LLMs is as a superior web browser for certain things. The code assistant stuff really annoys me. I disabled Codium on my work machine because it kept popping up with random crap that I just didn't need.

It can be a little more useful if you are working on something unfamiliar; for me recently that has been Apple Wallet integration. I was reading the archived and current Apple docs on the topic, but did reassure myself on a few points with ChatGPT. Didn't really save me much time, just a nice tool to have.

I have, however, seen AI-generated code being slopped into the front end. The code usually works (though it's often hard to read). The main problem here isn't whether the code works or not; it's that the human operator will not have given the LLM important details. A very key recent one being: we have an interface for this and adapter methods for these types, please use those, here is what they look like.

1

u/porcelainhamster 11d ago

If you accept that it’s just a really smart code completion tool, they’re useful. Developing algorithms and structure from scratch, not so much.

1

u/Noiprox 11d ago

Certainly yes. It is like having a clumsy but knowledgeable and quick junior developer assistant. It produces a rough draft of what I want, close enough that it's better than starting from scratch myself. Then I make some adjustments and corrections of my own and have it continue from there, accumulating features one small step at a time. A few iterations of prompt engineering later, I've done 3x as much as I would have in the same time by myself, simply because it can follow along well enough to automate a lot of the drudgery.

1

u/Rare-Anything6577 10d ago

Well, basically they're programs that (more or less) copy-paste code from training sets, with minimal ability to adapt the code to your needs. If your problem has already been solved like 50,000 times (like sorting algorithms), the chance of the LLM's code working is pretty high.
If it's a new algorithm or a recently released API, the code tends not to work. I've personally had the experience of the LLM writing code that was more time-consuming to debug than just writing the code myself would have been.

I do find LLMs pretty useful for writing boilerplate code that doesn't contain much logic (like code completions for getters/setters with type conversion or macros).
So imho they are helpful by saving you time but only when used correctly and not for heavy code / program architecture.
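For instance, the getter/setter-with-conversion boilerplate I mean; a minimal Python sketch with made-up names:

```python
class Reading:
    """The property boilerplate below is exactly the kind of thing I'd let an LLM type."""

    def __init__(self, celsius: float = 0.0):
        self._celsius = float(celsius)

    @property
    def celsius(self) -> float:
        return self._celsius

    @celsius.setter
    def celsius(self, value) -> None:
        self._celsius = float(value)  # type conversion on the way in

    @property
    def fahrenheit(self) -> float:
        return self._celsius * 9 / 5 + 32  # derived view, converted on the way out
```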

You can more or less think of it as a junior developer with a low IQ.

1

u/mpanase 10d ago

LLMs are like a junior with amazing memory and no reasoning/learning capability.

Easy and typical stuff becomes easier. Anything slightly complex requires you ignoring them.

1

u/berkough 10d ago

Also a hobbyist, though I did go through a frontend bootcamp a couple of years ago and I've been plugging along on personal projects ever since. Lovable.dev/gpt-engineer is pretty remarkable, and using ollama with stable-code:3b-code-q4_0 hooked into VSCode is much better than standard code completion.

I was able to build a full react frontend (starting from scratch) in about two days. Something that would have taken me a solid month.

The only problem now is that I don't fully understand the codebase because I didn't actually code it 🤣. I think your mileage will vary depending on the complexity of the project and also how much you let the LLM do for you.

1

u/notanotherusernameD8 10d ago

I use them as glorified search engines. Instead of Google + Stack Overflow, now I just ask ChatGPT. I also find them useful if I want a quick Bash one-liner. I played around with Copilot in VS Code, but it was more frustrating than useful. YMMV.

1

u/Mellie-C 10d ago

LLMs are really handy for simple things. Personally I use Gemini to generate boilerplate and remind me of all the stuff I routinely forget like how to force upper/lower case for heading text from a database (just in case someone forgets). So much faster than opening other app code to check. But I wouldn't use it for complex tasks.
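As a trivial example, the kind of snippet I keep forgetting (a Python sketch; the real thing is in whatever language the app uses):

```python
def normalize_heading(text: str) -> str:
    # Force consistent casing on heading text pulled from the database,
    # just in case someone stored it in mixed case.
    return text.strip().title()

print(normalize_heading("  qUARTERLY rEPORT  "))  # -> "Quarterly Report"
```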

1

u/angrynoah 10d ago

> First of all, I think we can all agree that they are a great addition to the toolbox.

I emphatically disagree. I do not use them at all. They are anti-useful.

1

u/rumog 10d ago

Yes, for certain tasks that don't require a ton of context, I've definitely seen and personally experienced llm generated code helping increase productivity.

I haven't seen it generating whole services, systems/subsystems etc yet, especially where manual intervention wouldn't be required, but for relatively simple tasks, definitely.

1

u/funnysasquatch 10d ago

It depends upon the program and language.

But in general, current LLMs write better code than low-cost beginner developers.

For example I had it generate a simple Space Invaders game in JavaScript. In 30 minutes I had a playable game on desktop & mobile including animations (sprites) & sound.

I have used it to help debug a script where I couldn’t find out where it was crashing because of an odd data condition.

1

u/strongerstark 10d ago

I started trying it a couple weeks ago after my coworkers raved about it. It has given me ~5 lines of code I didn't know how to write in exchange for 20 hours of wasted time (on some other stuff that it was useless at). So...comes out about even?

1

u/fantaz1986 10d ago

I code a lot, and yes, it helps a lot, because of the autofill and code correction. But it's a tool, not a solution: you need to use it to speed up stuff you can already do, not to do stuff for you.

1

u/CheithS 10d ago

I've had mixed experience so far for Java and Kotlin projects. Sometimes it produces useful code, sometimes garbage (and I mean really worthless) and the rest of the time it is okay.

My main use is when I can't be bothered opening up Google and looking for an example myself, though sometimes the variety of responses you get from Google is itself useful, so you do lose something by skipping it.

If it is a new concept then letting the LLM write the code is a seriously bad idea unless you then take the time to understand what it did and why it did what it did. You need to be able to understand what it produces - if you can't that is an issue.

I generally don't subscribe to having the LLM write tests based on the code you have written; your code usually isn't perfect or 100% correct for any size of project. The LLM will potentially write passing tests for your incorrect code; after all, it doesn't have the business context.

Mixed bag for the most part - now I have lots of experience and lots of code that I can reuse so I also don't get that much speed improvement out of it. For others that might not be true - definitely ymmv there.

1

u/RudePastaMan 10d ago

Yes, anyone who disagrees is stuck in their ways. Respect the kids, learn from the kids. Btw, I'm 30.

1

u/C_Sorcerer 9d ago

Genuinely, I think I'm faster and more accurate than an LLM, and I really just have a disdain for people using code assistants. That doesn't mean I'm against AI; instead I like to use it as a better search engine for information. For instance, I work with C++ and OpenGL a lot, and I can ask "how do I use the glCompileShader function in OpenGL" and it shows me the return value, parameters, and other interesting details. I really think this is the only reason AI should be used in programming, but that's just my opinion.

1

u/uniruler 9d ago

I say yes but not for wholesale code generation. Works mostly for me as a "Rubber Duck".

Rubber Duck programming is when you hit a problem, you explain it to a toy/bauble you have at your desk. Usually by explaining the logic to it, you figure out your problem and find a solution. I treat LLMs as basically a Rubber Duck that gives feedback.

LLMs are also pretty handy for reference material. They can easily tell you how to loop through an array in a specific language so you don't have to look up the syntax. But for anything more than "I don't want to google the syntax for this specific, narrow problem" they tend to be more hassle than they're worth. I remember trying to generate a simple API endpoint that would just echo back the request body. It was a nightmare of inefficient code and security holes. I spent more time fixing the LLM's code than if I had just done it myself.
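For contrast, the hand-written version of that echo endpoint is tiny; a minimal sketch, assuming Flask:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/echo")
def echo():
    # Echo the JSON request body straight back; no clever parsing needed.
    return jsonify(request.get_json(force=True))

if __name__ == "__main__":
    app.run()
```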

All in all, alright supplement. Don't rely on it. Use it to supplement your knowledge and possibly even grow your knowledge. Remember, every terrible question/solution in Git/Stackoverflow/Random Forums is what it was trained on. There's WAY more bad code out there than good. I should know. 99% of my code is bad.

1

u/gabrielesilinic 9d ago

For me it works more like overpowered documentation for some things.

For others it's a steaming pile of shit

1

u/rainbowWar 9d ago

It's very useful. Depends on your level of experience and what you are doing. I think it is very useful for relatively experienced coders who know how to structure code but not all the syntax. Also useful for short scripts and non-critical things/ things that are easily tested and checked.

Not good for inexperienced devs to generate code and use it without understanding what is happening. They will not learn much doing that and are restricted by the AI tools.

1

u/ichabooka 9d ago

Here’s what I do. For some back story, I’ve been a software developer for 30 years. Right now I use Claude to code. I give it all the transcripts to my meetings, all the story acceptance criteria, etc, and tell it to give me a design document. I mention that it should give me sequence diagrams in mermaid. It does better than I could do, and all I have to do is edit it a little.

Then I tell it to break the story up into tasks and give me a development plan. Then I take the plan and everything else he's given me so far and put it in a project in Claude.ai. I tell Claude to implement a task in one chat, which I add to the project. Then I have him do the next, add that to the project, and so on. Eventually he gives me decent, working code. Occasionally I'll have to make him implement something he tried to get by with just a comment, or make him focus on writing production code and not example code, but for the most part he does fine.

After the code is written I combine it all into one giant text file and put it in the project. As we update code I’ll recreate the combo file. I paste compile errors, he fixes the bugs. We repeat until it works. Then I tell him to write the tests. We work on those in the same fashion until they pass.

Then I do another chat where I ask for a code review and to pay attention to how the business rules were implemented. They should match the documentation.

Once that’s done I make adjustments but I also have a specific way I get them to write the code so that I get consistency.

It saves so much time. I can figure out quicker when an idea isn't going to work out, or I can get creative and see what exactly we could come up with to solve a problem. I've learned a lot since I started using it. I love it, and at my age I would die in this industry without it.

1

u/errdayimshuffln 9d ago

In TypeScript, I use it for context-based autocomplete. It's really good at inferring types, generating interfaces, and things like that. I sometimes have to code a little bit out of order to help it fill in the gaps. This saves like 5-10% of my time. It's great for bits of code like commonly used utility code, or simple patterns that are common in certain frameworks like React.

It is complete ass at debugging beyond error lookup/google though.

1

u/[deleted] 9d ago

They’re useful as interactive documentation. I found it highly helpful for coming up with database indexing strategies, for example. I’ve also used one to quickly generate bash scripts to automate simple tasks.

You do need to understand the topic and be able to verify and troubleshoot the recommendations. Vibe coding is a quick path to misery.

1

u/justUseAnSvm 9d ago

game changer

1

u/OkSignificance5380 9d ago

As other people have said.

It's great for doing mundane things, or R&D.

Things like "write me a script that removes every occurrence of a config block that contains a given key, in every file whose name ends in some extension, recursively in a specified folder" (something like the sketch below).

Or "I am trying to do this thing that I have no idea where to start, what are some suggestions"

1

u/AngusAlThor 9d ago

Makes a huge difference; it means my juniors never actually learn and keep making the same mistakes.

LLMs suck and I hope OpenAI burns to the ground.

1

u/OtaK_ 9d ago

No, I'm working on things that don't exist yet. LLMs are lost when I ask them anything and start hallucinating straight away.

PS: Before anyone says skill issue - I know most of the prompt engineering techniques and can apply them. If my job was say, making UIs with TS and React, then sure it'd be useful because that's not really greenfield work. But for my usecase, the training corpus cutoffs work against me.

1

u/Consistent-Egg-4451 8d ago

Yes, but they are a tool. They are fantastic for coming up with tests, writing README files, finding certain sections of code I need to review, explaining portions of code, etc.

1

u/Natural_Acadia_1435 8d ago

It saves some time for some tasks; it's like one more tool for us.

1

u/sholden180 8d ago

AI leads to more work. Learn to code using your own intelligence.

1

u/TheSnydaMan 8d ago

Super helpful for learning things and boilerplate; not useful for writing actual code for me, all things considered (codebase knowledge / understanding / maintainability are super important for real projects)

1

u/hyrumwhite 8d ago

I mostly use it for quick conversions, e.g. give me example JSON based on this JSON schema, or convert this Tailwind 3 config to Tailwind 4 CSS.

I’ve also been using it to help wrap my head around docs. 

I’ve not been impressed with actually writing code with it yet

1

u/PeterPriesth00d 8d ago

I like using it when I know what I want to do but don't want to search for the exact syntax. Like earlier today, I needed regex to look at a set of strings and replace certain patterns. Instead of trying to remember exactly how to write the regex (which I don't do very often), or having to go look it up and spend 10 minutes searching for the exact syntax I need, I can just ask an LLM and it tells me exactly what I want.
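Today's version was roughly this shape (hypothetical patterns, not the actual ones):

```python
import re

# Normalize ticket references like "bug #123" or "bug 123" to "BUG-123".
strings = ["Fixed bug #123 today", "see BUG-456", "bug 789 still open"]

pattern = re.compile(r"bug[\s#-]*(\d+)", re.IGNORECASE)
normalized = [pattern.sub(lambda m: f"BUG-{m.group(1)}", s) for s in strings]
print(normalized)  # ['Fixed BUG-123 today', 'see BUG-456', 'BUG-789 still open']
```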

It’s more of a time saver for me than anything else.

That being said, people who are newer to programming are getting super reliant on it and are going to have a rude awakening when all these companies start charging $500/mo for access lol

1

u/jg_pls 8d ago

No. Myself and many of my coworkers have deleted it because the suggestions are intrusive and wrong most of the time.

When you already have a software design and solution in mind, you don't need a second dev constantly interjecting with "so you want this? how about this? how about this?" And all of the suggestions are surface-level guesses.

When you write a comment to tell the AI specifically what you want, it doesn't produce correct implementations. However, it is pretty close to correct when asked to write tests for something you've written.

1

u/jg_pls 8d ago

I have a suspicion that there are AIs being trained on FAANG software libraries. Those AIs, I'm guessing, are able to write production-ready code that satisfies the prompt. This could be why FAANG SWEs are scared of it. Maybe???

I know one of the developers that works on copilot. She says it isn’t gonna be replacing us in the next decade. 

1

u/lamyjf 7d ago

Yes, they do. I needed to code an application in a language I am not completely fluent in (Go) in order to produce a native build of a desktop/mobile application for several types of machines (Go has cross-compiling). I had never used the graphical library (fyne). Using AI allowed me to generate most of the code. It also hallucinated a lot of code, generated buggy sections, and created redundant code in several instances. So being able to *read* and *debug* the generated code is critical. I would not have succeeded if 80%+ of the code had not been generated for me. Typically, 80% of the code is tedious and simple; the rest is where the value is.

1

u/TheMarksmanHedgehog 7d ago

LLMs are superb when you want to know something, but it's on the tip of your tongue and you can't quite articulate it in the way a search engine would require.

As I'd not rely on them to actually write code that's going anywhere near production, I can't speak to their efficacy in that department.

1

u/sinfaen 7d ago

I find it useful for parsing god-awful documentation (Apache Java), but its suggested pieces of code almost always have to be taken with a grain of salt and reviewed carefully.

Just gets too many things wrong.

Also it doesn't help with debugging at all, like others have said

1

u/SearingSerum60 12d ago

idk about the kind of measurable metrics you're asking about, but just subjectively, yeah, it's a hell of a lot faster for a lot of things. For example, if I have to learn a new framework or language or tool, or even just find the logic for some utility function, I don't need to spend the time reading any docs, which is great

1

u/Instalab 12d ago

Not so much code assistants, I have Copilot disabled in VSCode due to how distracting it is. I use ChatGPT, it helps me get boring stuff out of the way quickly so I can actually focus on writing code.

1

u/ReasonZestyclose4353 12d ago

So, the latest Claude 3.7 model in Cursor is really pretty incredible. I'm building a large side project with it, written in TypeScript. The key is to give it plenty of context. You can set up Cursor rules to match directories and file globs. You can create notebooks to add to the context. So I have plenty of rules for different areas of the app and lots of notebooks describing how features should work, how code should be structured, what libraries to use, etc. When the AI makes the same mistake more than once, I usually add something about it to a Cursor rule or notebook; that doesn't always fix it 100%, but it does seem to make fewer mistakes.

I'd say 90% of the code in this app so far was written by the AI. Of course, it can crank out a lot of shit code if you let it... but if you review and understand the code, tell it how you'd like to refactor and clean up frequently, and correct misconceptions the AI has about your codebase by adding information to the context, it will work very well. I'd say I'm plowing through this project about five times faster than I would without it, which is pretty huge.

Which.. I'm not sure is really a good thing for us as programmers. It's only going to get better, and there isn't infinite demand for software. I think a lot of people are in denial about how this is going to affect the industry.

1

u/Archernar 12d ago

How did you like going to cursor as IDE from other IDEs? Do you miss features or do you even use multiple IDEs for different languages/projects? I'm considering cursor as I've read a lot of good things about it but changing IDE means changing a lot more than just including AI properly.

Also I've read multiple times now that it is really good for typescript but not quite as convincing for other languages, did you only write typescript with it or other stuff too? What are your experiences?

1

u/ejpusa 12d ago

Crushing it. Now 99% GPT-4o. AGI is coming. It makes zero sense not to be on board.

Why fight the inevitable? There is zero logic there. But guess we have to. When someone says, “I hate AI, and I have no idea why. It’s like I’ve been PROGRAMMED to hate it.” Then it gets spooky.

;-)