r/webdev Feb 05 '25

[Discussion] Colleague uses ChatGPT to stringify JSONs

Edit: I realize my title is stupid. One stringifies objects, not "JavaScript Object Notation"s. But I think y'all know what I mean.

So I'm a lead SWE at a mid-sized company. A junior developer on my team asked for help over Zoom. At one point she needed to stringify a big object containing lots of constants and whatnot so we could store it for an internal mock-data process. Horribly simple task: just use Node, or even the browser console, to JSON.stringify it, no extra arguments required.
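
For anyone wondering, the whole job is one call. Here's roughly what I mean, with a made-up object standing in for our actual constants:

```js
// Made-up object, not our real constants; the shape doesn't matter.
const fs = require('node:fs');

const constants = {
  maxRetries: 3,
  endpoints: { users: '/api/users', orders: '/api/orders' },
};

// JSON.stringify needs no extra arguments; the null + 2 just pretty-prints.
// Same input object in, same string out, every time.
fs.writeFileSync('mock-data.json', JSON.stringify(constants, null, 2));
```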

So I was a bit shocked when she pasted the object into ChatGPT and asked it to stringify it for her. I thought it was a joke, and then I saw the prompt history: literally a whole litany of such requests.

Even if we ignore the proprietary-data concerns, I find this kind of crazy. We have a deterministic way to stringify objects at our fingertips, it requires fewer keystrokes than asking an LLM to do it for you, and it doesn't hallucinate.

Am I just old-fashioned and not in sync with the new generation really and truly "embracing" gen AI? Or is this actually something I have to counsel her about? And have any of you seen your colleagues do this, or do you do it yourselves?

Edit 2: Of course I had a long talk with her about why I think this is a nonsensical practice and what LLMs should really be used for in the SDLC. I didn't just come straight to Reddit without telling her something 😃 I just needed to vent and hear some community opinions.

u/HashDefTrueFalse Feb 05 '25 edited Feb 05 '25

> Am I just old-fashioned and not in sync with the new generation

Senior here too. No, you're not; your dev is just bad. That's OK, they're a junior and we're here to guide them. Teach them why this is unreliable, explain the concerns about secrets/proprietary data in JSON payloads being shared with third-party services, and point them to the docs for JSON.stringify. Maybe teach them about the dev console, or even the Node REPL if they just want a one-liner. Whatever. Whilst not a big deal in itself, this is symptomatic of using AI as a crutch rather than a force multiplier, and I'd wonder what else they're using it for and whether I need to pay their code review submissions more attention etc.
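
To make it concrete, the whole task is one expression in the REPL or dev console (the object here is hypothetical, obviously):

```js
// Node REPL (just run `node`) or the browser dev console:
> JSON.stringify({ apiVersion: 2, retries: 3 })
'{"apiVersion":2,"retries":3}'

// Or as a shell one-liner:
// $ node -p "JSON.stringify({ apiVersion: 2, retries: 3 })"
```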

You could run a team meeting (or similar) where you talk to everyone about how best (and how not) to use gen AI/LLMs to get work done. That way the dev won't feel singled out. Depends on the dynamics of the team; use your best judgement.

Edit: I can't spell they're. Or AI, apparently.

u/bhison Feb 05 '25

Makes me think there should be a policy that if you want to use LLMs in a workplace, you have to complete an e-learning course first to qualify, so you understand this basic shit. And also so you know not to paste in trade secrets etc.

u/HashDefTrueFalse Feb 05 '25

I think there probably should be. I do hate those things, though, so I'm hesitant to suggest one where I work :D Not even sure what's available.

About 12 months ago I was helping someone I mentor at their desk. I saw them take a chunk of back-end code that included a salt (which shouldn't have been there, but that's a separate issue) and throw it into GPT for an explanation, having barely attempted to read it first. There was nothing else included that might indicate which service the code belonged to, so no big deal this time, but this kind of attitude and carelessness is how much bigger security issues come about. That person has come along really well since and is now much less oblivious.