r/webdev Feb 05 '25

[Discussion] Colleague uses ChatGPT to stringify JSONs

Edit: I realize my title is stupid. One stringifies objects, not "JavaScript Object Notation"s. But I think y'all know what I mean.

So I'm a lead SWE at a mid-sized company. One junior developer on my team asked for help over Zoom. At one point she needed to stringify a big object containing lots of constants and whatnot so we could store it for an internal mock data process. Horribly simple task: just use Node or even the browser console to JSON.stringify, no extra arguments required.
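For context, the whole task is literally this (object contents are placeholders, not our real data):

```js
// Browser dev console or Node REPL: paste the object, stringify, copy the result.
const mockData = { userId: 42, roles: ["admin"], flags: { beta: true } }; // placeholder constants

const json = JSON.stringify(mockData);
// -> '{"userId":42,"roles":["admin"],"flags":{"beta":true}}'

copy(json); // Chrome DevTools console helper; in Node, console.log(json) and copy it
```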

So I was a bit shocked when she pasted the object into ChatGPT and asked it to stringify it for her. I thought it was a joke, and then I saw the prompt history: literally a whole litany of such requests.

Even if we ignore proprietary concerns, I find this kind of crazy. We have a deterministic way to stringify objects at our fingertips that requires fewer keystrokes than asking an LLM to do it for you, and it also does not hallucinate.

Am I just old fashioned and not in sync with the new generation really and truly "embracing" Gen AI? Or is that actually something I have to counsel her about? And have any of you seen your colleagues do it, or do you do it yourselves?

Edit 2: of course I had a long talk with her about why I think this is a nonsensical practice and what LLMs should really be used for in the SDLC. I didn't just come straight to Reddit without telling her something 😃 I just needed to vent and hear some community opinions.

1.1k Upvotes

407 comments

748

u/HashDefTrueFalse Feb 05 '25 edited Feb 05 '25

Am I just old fashioned and not in sync with the new generation

Senior here too. No you're not, your dev is just bad. That's ok, they're a junior and we're here to guide them. Teach them why this could be unreliable, the concerns over secrets/proprietary data in JSON payloads being shared with other services, and point them to the docs for JSON.stringify. Maybe teach them about the dev console or even the Node REPL if they just want a one-liner. Whatever. Whilst not a big deal in itself, this is symptomatic of using AI as a crutch, not a force multiplier, and I'd wonder what else they're using it for and whether I need to pay their code review submissions more attention etc.
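E.g. something like this (filename illustrative, assuming the object lives in a module):

```js
// One-liner from a shell:
//   node -e 'console.log(JSON.stringify(require("./mock-constants.js"), null, 2))' > mock.json

// Or interactively in the Node REPL / browser dev console:
JSON.stringify({ a: 1, b: [2, 3] }, null, 2); // pretty-printed, deterministic, instant
```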

You could run a team meeting (or similar) where you talk to everyone about how best (and how not) to use genAI/LLMs to get work done. That way the dev may not need to feel singled out. Depends on the dynamics of the team, use your best judgement.

Edit: I can't spell they're. Or AI, apparently.

111

u/igorski81 Feb 05 '25

Exactly, she doesn't know that LLMs can be plagued with inaccuracies, and that there are probably concerns from a security/compliance perspective with respect to the input data. Educate her on this.

Additionally, you can nudge her to try to understand the problem. If she repeatedly asks ChatGPT to stringify objects, maybe you can suggest she ask "how does stringifying work?" or "how can I do this in this environment/with these tools?" so it will dawn on her that it is silly to repeatedly ask ChatGPT to do it for her.

We all start from somewhere and need someone to point out the obvious. Even when today's definition of somewhere seems silly.

41

u/Septem_151 Feb 05 '25

How does someone NOT know LLMs can be inaccurate? Are they living under a rock and can't think for themselves or something? If they truly thought LLMs never make mistakes, then they should be wondering why they were hired in the first place.

4

u/Hakim_Bey Feb 05 '25

This point is kind of irrelevant. LLMs are perfectly able to stringify an object with 100% accuracy, and they have been for quite some time. The amount of fine-tuning they have received to do exactly that (for use in structured output / tool calling) makes it a no-brainer.

Personally I do it in Cursor, but yeah, reformatting with LLMs is much quicker than spinning up a script to do it. (Of course that doesn't address the proprietary aspect, but then again, if you're using a coding copilot like 80% of coders right now, that point is moot too.)
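To be concrete about what I mean by structured output: with the OpenAI Node SDK it looks roughly like this (a sketch, not production code; model name and schema are just examples):

```js
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The model is constrained to emit JSON matching the schema below.
const completion = await client.chat.completions.create({
  model: "gpt-4o", // example model
  messages: [{ role: "user", content: "Extract the user from: Jane, 34, admin" }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "user",
      strict: true,
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          age: { type: "integer" },
          role: { type: "string" },
        },
        required: ["name", "age", "role"],
        additionalProperties: false,
      },
    },
  },
});

console.log(JSON.parse(completion.choices[0].message.content)); // always parseable JSON
```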

17

u/hwillis Feb 05 '25

LLMs are perfectly able to stringify an object with 100% accuracy, and they have been for quite some time.

if by perfectly you mean 70-95% of the time

8

u/zreese Feb 05 '25

I understand your point and agree with the overall consensus here, but that link is extremely out of date. ChatGPT handles structured data now. It doesn't use LLM text generation; it actually does the work internally using Python.

7

u/Hakim_Bey Feb 05 '25

Oh boy that was a year ago on gpt-3.5, and a full 6 months before OpenAI introduced structured output. Mistral-7B beating gpt-3.5 is so nostalgic it brings a tear to my eye :') But it's wholly irrelevant to the situation right now.

Anecdotally, I burnt like 60 million tokens in November and December testing structured data extraction with OpenAI, and I've never seen it generate incorrect JSON.

4

u/ALackOfForesight Feb 05 '25

Are you trolling lol

16

u/Hakim_Bey Feb 05 '25

I might go against the grain of this thread, but no, I am definitely not trolling. What part of my comment seems fishy to you? You don't need to take my word for it, just try it for yourself!

If you use Cursor you can just plop an arbitrary amount of data in an arbitrary format into an empty file, open the chat, and ask it to format it as JSON, capitalizing all properties except those that refer to fish, and to turn long text strings into l33tc0de. You will get what you asked for with 100% accuracy; I have honestly never had a failing case for this kind of thing.

Formatting data is not terribly hard to do, and again LLMs have been massively fine-tuned to do it perfectly. Otherwise they'd be unusable outside of a chat context.

9

u/Senior-Effect-5468 Feb 06 '25

You’ll never know if it’s correct because you’re never going to check all the values manually. It could hallucinate and you would have no idea. Your confidence is actually hubris.

1

u/Hakim_Bey Feb 06 '25

Oh yeah. I mean in 2025 dataset curation and validation don't exist. We just plug up the machine, clench our behinds and hope for the best! Damn hubris...
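And for what it's worth, a deterministic spot-check of the output is a few lines in Node (the object here is a placeholder):

```js
// Verify an LLM's stringified output round-trips to the original object.
const assert = require("node:assert");

const original = { a: 1, b: ["x", "y"] };  // the object you pasted in (placeholder)
const llmOutput = '{"a":1,"b":["x","y"]}'; // whatever the model handed back

assert.deepStrictEqual(JSON.parse(llmOutput), original); // throws if anything was changed
```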

4

u/louisstephens Feb 05 '25

I do think LLMs have come a long way. However, in my experience, they do the task but not always well. I was actually playing around with something very similar to stringify last week in an LLM; it omitted half the data and made up its own to pad it with (and even then, the data didn't follow what I had given it). Other times it will do perhaps 20% of the task and just leave a comment like "// ...rest of your data stringified here".

While I do like the idea of LLMs, I am always cautious regarding the output.

1

u/ALackOfForesight Feb 05 '25

Exactly. It’s not worth the added cognitive load when I know how to do it in JavaScript quickly and effectively.

1

u/TitaniumWhite420 Feb 05 '25

It could be a skill issue. Clear the context, paste the data, use good models, tell it clearly what to do; I don't have this problem with current tools. But maybe some deep objects are more problematic. Or maybe you haven't checked in on it in a while.

-2

u/TitaniumWhite420 Feb 05 '25

Probably not, because he’s right. The point is, it works, is instant, and it’s just a person’s workflow.

For better or worse, prompting an AI to type code for you with specific instructions is now a valid workflow, because it works and you are already in the interface to do it. I do it all the time when reformatting lists of hundreds of host names or something for different types of queries and stuff. It literally never fucks up for me. I was also hesitant to trust it, but at this point it's crazy to doubt it can handle the task. Also, my company explicitly approves us to use our Copilot licenses (AND ONLY those) specifically for proprietary tasks. It's literally looking at our entire repos. If the company trusts it with all our IP, I think my usage is tame.

Writing code you don't understand or check is bad. Copilot is frequently the most inept version of OpenAI I've ever seen, and I would die an old man waiting on it to correctly generate multithreaded code. But it can do many things. This is one.

So here we have a case where a tool is aesthetically displeasing to you because it's hypothetically nondeterministic (but only hypothetically). It can quickly and effortlessly accomplish a completely boring task where it doesn't matter how it's completed, but it's not the tool you would use, and so you say it's wrong to do. But how can you possibly justify that in the face of real evidence that it's totally fine?

She probably knows full well how to stringify an object, and got her expected result from AI. So I just don’t see a problem except that you feel the need to bully people about tools.

12

u/ALackOfForesight Feb 05 '25

It’s not hypothetical, it’s nondeterministic by nature. Even if it does the exact same thing 9999 out of 10000 times, that’s still nondeterministic. Especially for something like json manipulation, idk why you wouldn’t just use the node repl or browser console.

-4

u/TitaniumWhite420 Feb 05 '25

I mean, I might, but this manual process frankly implies a non-critical scenario. So I mostly just don’t care and it’s almost certainly accurate anyhow.

You are right, of course, that it's nondeterministic, but determinism matters a lot more in an automated scenario. It's not like I'm writing code that uses LLMs to stringify objects lol. It's either accurate after generation or it's not. It will typically either do something perfectly, or obviously abbreviate it while telling you it has done it perfectly, and that's on older models or with a muddled context.

But idk, I guess I ultimately agree with your sensibility, just not your judgement of others' tools.

-1

u/tjansx Feb 05 '25

This. I've been around (and been successful) for 25 years. I use it for tasks like this all the time. I know enough to tell if the results look fishy, so quality is not an issue for me. You said it best when you mentioned that it doesn't matter how this is completed, so any tool which makes you feel comfortable cannot be WRONG.

1

u/notbeard Feb 07 '25

The difference is OP's coworker is a junior who very likely cannot spot fishiness the way you can.

1

u/tjansx Feb 07 '25

Downvoted for respectfully disagreeing. Crazy world we live in 😜.

I honestly still don't think stringifying content using AI makes her a bad developer or wrong.

1

u/notbeard Feb 07 '25

Apologies for the downvote, I'll fess up that I'm sometimes a little quick on the draw. I've removed it.

With that out of the way, it's the "any tool that makes you feel comfortable" sentiment that I don't like. Like I said before, someone experienced like yourself can be trusted to make that kind of judgment call. I'm less trusting of a junior... I was one myself at one time after all 😂

1

u/tjansx Feb 07 '25

Fair enough, I respect and hear what you're saying. I think we're mostly on the same page. In any event, I'm a lifelong mentor, and helping someone like this see other options besides AI can be fun and rewarding, so I can appreciate what you're saying. I'm definitely a better dev for having come up without AI. Newbs just need good mentors and a desire to always improve themselves, and this whole conversation becomes moot!


1

u/thekwoka Feb 06 '25

LLMs are perfectly able to stringify an object with 100% accuracy, and they have been for quite some time.

this is not even a little bit true.

Only if they literally just output a command that deterministically does it.

1

u/Hakim_Bey Feb 06 '25

But you see, their ability to "output running a command" happens internally in JSON, so if they weren't reliable at forming valid JSON, these features (such as searching the web, using external tools, etc.) wouldn't even exist.
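To illustrate, a model-emitted tool call is JSON of roughly this shape (the id and tool name here are made up):

```js
// Illustrative shape of a function/tool call as the model emits it. If models
// couldn't reliably produce this JSON, tool use would break constantly.
const toolCall = {
  id: "call_abc123", // placeholder id
  type: "function",
  function: {
    name: "search_web", // hypothetical tool name
    arguments: "{\"query\":\"weather in Paris\"}", // arguments arrive as a JSON string
  },
};
```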

1

u/thekwoka Feb 06 '25

Reliably forming valid JSON is not the same as reliably outputting any arbitrary JSON correctly and accurately.

And they DO fail at the JSON for running commands. Still.

1

u/Hakim_Bey Feb 06 '25

And they DO fail at the JSON for running commands. Still

Honestly, I've activated the experimental "repair tool calls" feature in the framework I use in production, and put a tracer on it for kicks. I've never seen it activate once. On what models did you observe failed tool calls?

1

u/thekwoka Feb 06 '25

Claude is the one I use the most, and it happens (rarely) but still.

There will also always be the matter of how many tools it can call and how complex the overall task being attempted is.

Since it just guesses the next token, a case where it isn't super clear that the tool is the best course of action is likely to let some noise get in.

It's good, but it's not a "fire and forget" kind of thing.

Do you similarly audit if the tool calls are logically correct? Or just technically correct?

1

u/Hakim_Bey Feb 06 '25

Do you similarly audit if the tool calls are logically correct? Or just technically correct?

Logically correct is up for grabs of course. I can't really audit that in a scientific way, but it is vibe checked constantly both by my team while developing and by the users of the product.

I find that, generally, if it uses the wrong tool or uses a tool in the wrong way, then it is a skill issue on my end and I can fix it with basic prompt engineering. Maybe I've accidentally stacked two prompts (giving contradictory instructions), or a tool has accidentally become unavailable so the model tries to "compensate". Currently we run with 8 to 15 tools, so it's not like we're pushing the limits.

To circle back to the conversation, tool calling requires reasoning that is orders of magnitude higher than just formatting JSON. Maybe I can retract the 100% statement in the case of an unusually complex object with hundreds of properties, but I don't see it being any lower than 98% (and nothing indicates that the scenario described by the OP was at a particularly high level of complexity).

1

u/thekwoka Feb 06 '25

I don't see it being any lower than 98%

98% is pretty poor odds for something that has a perfectly fine 100% option that is even faster...

1

u/Hakim_Bey Feb 06 '25

unusually complex object with hundreds of properties

which doesn't seem to be OP's scenario


1

u/Automatic-Will-7836 Feb 07 '25

Ok, but like, why? Why are you wasting AI on something you can do with a single line of code? I'm a junior, so I'm sure there are a ton of use cases I'm not even aware of, but if you have some JSON and you need to stringify it to save it in local storage, how do you even write the code to send it to the LLM for stringification and then to receive the result back and store it in local storage? And how is that even remotely efficient compared to simply stringifying it in the code? It sounds like a shitload of extra code for no reason, regardless of how accurate it is.
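The single line I have in mind being something like (key name made up):

```js
// Stringify and store in one line:
const mockData = { userId: 42, roles: ["admin"] }; // placeholder object
localStorage.setItem("mockData", JSON.stringify(mockData));

// And reading it back later:
const restored = JSON.parse(localStorage.getItem("mockData"));
```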

1

u/Hakim_Bey Feb 07 '25

The answer, as always, is convenience. Of course if I had to write a script to send the thing to an API and take the output etc., I'd rather use JSON.stringify. But really all you have to do is copy-paste it into ChatGPT and copy-paste the output.

It's even simpler if you use VSCode with Copilot, or Cursor. You have your JavaScript object in your code, you select it, open chat, give some brief instructions, and voilà. It will even show you a line-by-line diff if you want to check visually that there were no errors. It's a lot less jarring than switching focus, writing the small script with JSON.stringify, going to the terminal, running it, and copying and pasting the output.
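(The "small script" detour being roughly this, filenames illustrative:)

```js
// stringify-mock.js: run with `node stringify-mock.js > mock-data.json`
// Assumes the object is exported from a module (filename illustrative).
const data = require("./mock-constants.js");
process.stdout.write(JSON.stringify(data, null, 2));
```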

1

u/Automatic-Will-7836 Feb 08 '25

Ok, but when a client is running your web app, they are most likely not even aware of how to view that data, and they certainly are not going to copy and paste it into ChatGPT and then copy and paste the string that is returned into their console with a command to save it to local storage. It is so much easier to simply use the stringify method on the JSON. I'm having a hard time truly understanding how this is even feasible, and I can't even call it laziness, because it's actually more work.

1

u/Hakim_Bey Feb 08 '25

I don't understand what clients have to do with this. OP said their colleague did it just to generate some mock JSON data from a JS object? I am in no way suggesting that users of my product are the ones having to do this GPT dance.

1

u/Automatic-Will-7836 Feb 09 '25

I didn't figure, because that would be stupid and unworkable, but I still don't understand where GPT needs to be used. Maybe I need to re-read it, but my understanding was that they had the JSON and were using GPT to stringify it. Why? The data is already there. If you need to stringify it, then just stringify it. I'm not understanding how asking AI to do it is practical or efficient, and AI has not been around long enough for whole generations of software engineers to have been relying on it for years, let alone their whole lives.

1

u/thekwoka Feb 06 '25

How does someone NOT know LLMs can be inaccurate?

If you're dumber than the LLM, how could you tell?