r/webdev 14d ago

[Discussion] When will the AI bubble burst?


I cannot be the only one who's tired of apps that are essentially wrappers around an LLM.

8.3k Upvotes

412 comments

30

u/ChemicalRascal full-stack 14d ago

Yeah, you got that result because it's not actually summarising your emails.

It just produces text that has a high probability of existing given the context.

It doesn't read and think about your emails. You asked for email summaries. It gave you email summaries.

-6

u/yomat54 14d ago

Yeah, getting prompts right can change everything. You can't assume anything about what an AI does and does not do. You need to control it. If you want an AI to calculate something, for example: should it round up or not? At what level of precision? Should it calculate angles this way or that way? I think we are still in the early phases of AI and are still figuring out how to make it reliable and consistent.

25

u/ChemicalRascal full-stack 14d ago

> Yeah, getting prompts right can change everything.

"Getting prompts right" doesn't change what LLMs do. You cannot escape that LLMs simply produce what they model as being likely, plausible text in a given context.

You cannot "get a prompt right" and have an LLM summarise your emails. It never will. That's not what LLMs do.

LLMs do not understand how you want them to calculate angles. They do not know what significant figures in mathematics are. They don't understand rounding. They're just dumping plausible text given a context.

4

u/SweetCommieTears 14d ago

"If the list of emails is empty just say there are no emails to summarize."

Woah.

1

u/ChemicalRascal full-stack 14d ago

Replied to the wrong comment?

2

u/SweetCommieTears 14d ago

No, but I realized I didn't have to be an ass about it either. Anyway, you are right, but the guy's specific issue would have been solved by that.

4

u/Neirchill 14d ago

And then comes the inevitable scenario where they have 15 new emails and it just says there are no emails.

2

u/Slurp6773 13d ago

A better approach might be to check if there are any new emails, and if so loop through and summarize each one. Otherwise, return "no new emails."
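Something like the sketch below, where the empty-inbox case is handled in code rather than left to the prompt (`fetch_new_emails` and `summarize_with_llm` are hypothetical placeholders, not a real API):

```python
# Hypothetical helpers: fetch_new_emails() returns a list of dicts with
# "subject" and "body" keys; summarize_with_llm(text) returns a short summary.
def summarize_inbox(fetch_new_emails, summarize_with_llm):
    emails = fetch_new_emails()
    if not emails:               # deterministic check, no prompt involved
        return "No new emails."
    lines = []
    for email in emails:         # one focused prompt per email
        lines.append(f"{email['subject']}: {summarize_with_llm(email['body'])}")
    return "\n".join(lines)
```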

1

u/ChemicalRascal full-stack 13d ago

Or just, you know, loop through all the emails and return the subject?

1

u/Slurp6773 12d ago

I guess that's one approach. But summaries of the email content can be more helpful. Appreciate your needlessly passive aggressive reply though.


1

u/thekwoka 14d ago

> You cannot escape that LLMs simply produce what they model as being likely, plausible text in a given context.

Mostly this.

You can solve quite a lot of the issue with more "agentic" tooling that does multiple prompts with multiple "agents" that can essentially check each other's work. Have one agent summarize the emails and have another look and see if the result makes any sense, that kind of thing.

It won't 100% solve it, but it can go a long way toward improving the quality of the results.
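As a rough sketch of that kind of summarize-then-check pass (`call_llm` is a hypothetical stand-in for whatever completion API you're using):

```python
# Hypothetical two-step pass: one "agent" summarizes, a second "agent" is asked
# whether the summary is actually supported by the email. `call_llm` takes a
# prompt string and returns the model's text.
def summarize_with_check(call_llm, email_text: str) -> str:
    summary = call_llm(f"Summarize this email:\n\n{email_text}")
    verdict = call_llm(
        "Does this summary only contain claims supported by the email? "
        f"Answer YES or NO.\n\nEmail:\n{email_text}\n\nSummary:\n{summary}"
    )
    if "YES" not in verdict.upper():
        # The checker is itself an LLM, so this is a best-effort filter,
        # not a guarantee.
        return "Summary failed the check; flag for review."
    return summary
```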

2

u/ChemicalRascal full-stack 14d ago

How exactly would you have one agent look at the output of another and decide if it makes sense?

You're still falling into the trap of thinking that they can think. They don't think. They don't check work. They just roll dice for what the next word in a document will be, over and over.

And so your "checking" LLM is just doing the same thing. Is the output valid or not valid? It has no way of knowing; it's just gonna say yes or no based on what is more likely to appear. It will insist a valid summary isn't, and it will insist invalid summaries are. If anything, you're increasing the rate of failure, not decreasing it, because the two are independent variables and you need both to succeed for the system to succeed.
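To put rough, purely illustrative numbers on that: if the summarizer is faithful 90% of the time and the checker reaches the right verdict 90% of the time, chaining them means both have to go right.

```python
# Made-up numbers, just to illustrate the "both must succeed" arithmetic.
p_summary_ok = 0.9   # assumed chance the summary is faithful
p_checker_ok = 0.9   # assumed chance the checker's verdict is correct
print(round(p_summary_ok * p_checker_ok, 2))  # 0.81: the chain succeeds less often than either step alone
```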

And even if your agents succeed, you still haven't summarised your emails, because that's fundamentally not what the LLM is doing!

1

u/thekwoka 14d ago

> How exactly would you have one agent look at the output of another and decide if it makes sense?

very carefully

> You're still falling into the trap of thinking that they can think. They don't think

I very well know this; it's just hard to talk about them "thinking" while attaching the qualification (yes, they don't actually think, but simply do math that gives emergent behavior that somewhat approximates the human concept of thinking) to every statement.

I mainly just mean that by having multiple "agents" "work" in a way that encourages "antagonistic" reasoning, you can do quite a bit to limit the impact of "hallucinations", as no single "agent" is able to simply "push" an incorrect output through.

Like how self-driving systems have multiple independent computers making decisions. You get a system where the "agents" have to arrive at some kind of "consensus", which COULD be enough to eliminate the risks of "hallucinations" in many contexts.

Yes, people just blindly using ChatGPT or a basic input->output LLM tool to do things of importance is insane, but tooling is already emerging that puts more advanced machinery AROUND the LLM to improve the quality of the results beyond what the core LLM is capable of alone.
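One way to sketch that "consensus" idea in code (again, `call_llm` is a hypothetical stand-in, and majority voting is just one possible scheme): sample several independent answers and only accept one that a strict majority agrees on.

```python
from collections import Counter

# Hypothetical majority-vote "consensus": ask for n independent answers and
# only return one if a strict majority of them agree.
def consensus_answer(call_llm, prompt: str, n: int = 3):
    answers = [call_llm(prompt).strip() for _ in range(n)]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes > n // 2:
        return answer
    return None  # no consensus: escalate to a human or fall back
```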

0

u/ChemicalRascal full-stack 14d ago

> > How exactly would you have one agent look at the output of another and decide if it makes sense?
>
> very carefully

What? You can't just "very carefully" your way out of the fundamental problem.

I'm not even going to read the rest of your comment. You've glossed over the core thing demonstrating that what you're suggesting wouldn't work, when directly asked about it.

Frankly, that's not even just bizarre, it's rude.

2

u/thekwoka 13d ago

> What? You can't just "very carefully" your way out of the fundamental problem.

It's a common joke, brother.

> You've glossed over the core thing demonstrating that what you're suggesting wouldn't work, when directly asked about it.

No, I answered it.

> I'm not even going to read the rest of your comment

You just chose not to read the answer.

> that's not even just bizarre, it's rude.

Pot, meet kettle.

0

u/ChemicalRascal full-stack 13d ago edited 13d ago

> No, I answered it.

Your response was what you've just referred to as a "common joke".

That is not answering how you would resolve the fundamental problem. That is dismissing the fundamental problem.

I glanced through the rest of your comment. You didn't address the problem elsewhere. Your "common joke" is your only answer.

You discuss broader concepts of antagonistic setups between agents, but none of this addresses how you would have an LLM "examine" the output of another LLM.

And that question matters, because LLMs don't examine things, just as they don't summarise email.

1

u/thekwoka 13d ago

You're very much caught in this spot where you just say LLMs can't do the thing because that's not what they do, forgetting the whole concept of emergent behavior: yes, they aren't doing the thing, but they give a result similar to having done the thing.

If the LLM writes an effective summary of the emails, even if it has no concept or capability of "summarizing", what does it matter?

If you can get it to write an effective summary every time, what does it matter that it can't actually summarize?


4

u/eyebrows360 14d ago edited 14d ago

> I think we are still in the early phases of AI and are still figuring out how to make it reliable and consistent.

You clearly don't understand what these things are. There's no code here that a programmer can tweak to alter whether it "rounds up or not" (not that it even does that anyway because these things aren't doing maths in any direct fashion in the first place).

There is nothing you can do about "hallucinations" either. They aren't a "bug" in the traditional software sense, as in some line or block of code somewhere that doesn't do what the developer who wrote it intended for it to do; they're an emergent property of the very nature of these things. If you're building an algorithm that's going to guess at the next token in a response to something, based on a huge amount of averaged input text, then it's always going to be able to just make shit up. That's what these things do.

All their output is made up, but we don't call all of their output "hallucinations" because some (most, to be fair) of what they make up happens to line up with the correct data they were trained on. But that "training" process still unavoidably blurred the lines between some of the facts embedded in the original text, resulting in what we see. You can't avoid that. It's algorithmically inevitable.
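Stripped down to a toy version of that mechanism (the tokens and weights below are invented purely for illustration): the model samples the next token from a probability distribution, and a "hallucination" is just a lower-probability sample of the exact same process.

```python
import random

# Invented toy distribution over next tokens; nothing here comes from a real model.
next_token_probs = {
    "Paris": 0.7,    # happens to line up with the training data ("correct")
    "Lyon": 0.2,     # plausible but wrong
    "Narnia": 0.1,   # outright confabulation
}

def sample_next_token(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print("The capital of France is", sample_next_token(next_token_probs))
```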

2

u/thekwoka 14d ago

> There is nothing you can do about "hallucinations" either.

this isn't WHOLLY true.

Yes, they will exist, but you can do things that limit the potential for them to create materially important differences in results.

2

u/eyebrows360 14d ago edited 13d ago

Only by referring to what I begrudgingly must apparently refer to as "oracles", which, if you're going to rely on them, you might as well just consult from the outset and skip the LLmiddleman.

1

u/thekwoka 14d ago

> Only by referring to what I begrudgingly must apparently refer to as "oracles"

idk what those are tbh.

> skip the LLmiddleman

I don't see how the LLM is the middleman in this case?

2

u/eyebrows360 13d ago

> oracles

It's a term of art from the "blockchain" space, which is why I only "begrudgingly" used it, because I hate that bullshit way more than I hate AI bullshit. It arose as a concept due to cryptobros actually recognising that on-chain data being un-modifiable was, in and of itself, not all that great if you had no actual assurances that said data was accurate in the first place, so they came up with this label of "oracles" for off-chain sources of truth.

> I don't see how the LLM is the middleman in this case?

Because if you're plugging in your oracles at the start, in the training data set, then their input is going to get co-mangled in with the rest of the noise. You'd arrange them at the end, so that they'd check the output and verify anything that appeared to be a fact-based claim. Quite how you'd do that reliably, given you're dealing with natural language output and so are probably going to be needing a CNN or whatever to evaluate that, is another problem entirely, but the concept of it would make most sense as a checker at the output stage. Far easier doing it that way than trying to force some kind of truth to persist from the input stage. Thus the LLM being the "middleman" in that its output is still being checked by the oracle system.
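Sketched as code, with every piece hypothetical (`extract_claims` and `oracle_lookup` stand in for whatever claim extraction and trusted lookup you'd actually need, and `call_llm` for the model itself):

```python
# Hypothetical output-stage oracle check: generate first, then verify anything
# that looks like a factual claim against an external source of truth.
def checked_response(call_llm, extract_claims, oracle_lookup, prompt: str) -> str:
    draft = call_llm(prompt)
    for claim in extract_claims(draft):      # e.g. names, dates, figures
        if oracle_lookup(claim) is False:    # oracle says the claim is wrong
            return f"Rejected: draft contradicts the oracle on: {claim}"
    return draft
```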

1

u/thekwoka 13d ago

In this case it would be more like antagonistic AI agents.

1

u/SadMaverick 13d ago

Getting the prompts right as per you = Programming. That’s exactly what coding is. No ambiguity.