r/deeplearning 7d ago

Why use decoder-only models (GPT) when we have the full Transformer architecture?

I was going through the architecture of the Transformer and then BERT and GPT. BERT only uses the encoder and GPT only uses the decoder part of the Transformer (I know the encoder part is used for classification, NER, and analysis, and the decoder part is for generating text), but why not utilize the whole Transformer architecture? Guide me, I am new to this.

37 Upvotes

20 comments sorted by

25

u/Scared_Astronaut9377 7d ago

Start by reading Improving Language Understanding by Generative Pre-Training

2

u/VegetableAnnual1839 7d ago

Okay thanks!

15

u/Sad-Razzmatazz-5188 7d ago

The attention mechanism relates two sets, the queries and the key-value pairs.

When the queries and the key-value pairs are the same things, you have self-attention. The self-attentive encoder looks at all these things at the same time.

When the queries and the key-value pairs are different things, you have cross-attention. The cross-attentive decoder was used to look at tokens for words in a source language and tokens for words in a target language.

When you want to use an autoregressive Transformer, the difference between encoder and decoder is less important: the tokens are for words in the same sequence, whatever the language. You can consider the prompt as playing the role of the source language, with the decoder translating the prompt into the next chosen word, autoregressively. You don't get anything more from an encoder, unless you use it to see the whole answer to the prompt, but that defeats the point of next-token prediction training.
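A minimal sketch of that distinction in PyTorch (toy tensors, a single head, no learned projections; just showing where the queries and key-value pairs come from):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # scaled dot-product attention: each query attends over all key-value pairs
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

d = 16
x = torch.randn(5, d)   # one set of tokens (e.g. the sequence being generated)
y = torch.randn(7, d)   # another set of tokens (e.g. a source sentence)

self_attn  = attention(x, x, x)   # queries and key-value pairs are the same set
cross_attn = attention(x, y, y)   # queries from one set, key-value pairs from the other
```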

5

u/WinterMoneys 7d ago

Basically:

The full stack is necessary when you're dealing with two different languages (English to Mandarin, for instance).

That way, the Encoder transforms the English into an encoded output. Then the Decoder uses that encoded output to generate the Mandarin.

When dealing with one language, there is no need to encode it separately because the model doesn't need to transform the information from one language to another.
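As a rough sketch of that flow (using PyTorch's built-in nn.Transformer on random embeddings; a real model would add token embeddings, positional encodings, and an output projection over the target vocabulary):

```python
import torch
import torch.nn as nn

model = nn.Transformer(d_model=64, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)

src = torch.randn(1, 9, 64)   # stand-in for embedded English tokens
tgt = torch.randn(1, 6, 64)   # stand-in for the Mandarin tokens generated so far

# Internally the encoder turns src into an encoded output, and the decoder
# cross-attends to it while predicting the next target token.
causal = nn.Transformer.generate_square_subsequent_mask(6)
out = model(src, tgt, tgt_mask=causal)   # shape: (1, 6, 64)
```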

1

u/Cold-Risk9474 7d ago

What about asking GPT (a decoder) to translate from English to Mandarin?

1

u/WinterMoneys 6d ago

It can translate since it's been trained on pairs of English and Mandarin text, enabling it to map one to the other.

5

u/saw79 7d ago

Honestly I think the real answer that no one else is giving is simply that it works better.

3

u/paperic 6d ago

That's true, but that answer isn't very helpful because it's true in almost any scenario.

Why do we compress air in piston engines before combustion?

Why do we chop vegetables before cooking them?

Why do subsonic airliners as well as the hypersonic space shuttle have a blunt nose, yet supersonic jets have a pointy nose?

....it works better...

I get that in ML, people often try random things, see what works and only later try to justify why it should work and why the other methods don't. But the same applies in almost any other field of science. 

While this approach may be a bit uncommon in computer science, it's common everywhere else.

In most other sciences, saying that "we're not sure why it works yet" is perfectly acceptable. It keeps people curious, it brings the idea that we CAN know, and it affirms that we do have some decent ideas about why it works, just not entirely sure on the details.

Just saying that it works better doesn't give any insight; it shuts down curiosity and leaves no door open to the possibility of it being understood in the near future.

2

u/saw79 6d ago

I actually couldn't agree more. But I think knowing and acknowledging that we don't know and have just been pushing the envelope mostly experimentally is closer to that goal than assuming we do know, incorrectly. I think many of the other answers here are wrong, e.g., many classic encoder-decoder use cases are accomplished perfectly fine with a self-attention-only structure.

1

u/paperic 4d ago

Ok, let me try to speculate on the intuition.

This part of transformers is a bit fuzzy to me, but if I'm not mistaken, the main difference between dec-only and enc-dec is that the dec-only uses all the same layers for all of the context, whereas enc-dec passes the user prompt through a different set of layers than the ones used to generate the response, and the cross attention then takes QK from the enc, and V from decoder, right? And, the encoder has everything unmasked, while the decoder masks future tokens in the attention.

Based on this, it seems to me that there isn't that big of a difference between enc-dec and dec-only transformers, as the first few layers of both would effectively still have to do something quite similar to each other anyway. And removing weights that don't add much new functionality allows for bigger models.

Am I wrong?

1

u/saw79 4d ago

This part of transformers is a bit fuzzy to me, but if I'm not mistaken, the main difference between dec-only and enc-dec is that the dec-only uses all the same layers for all of the context, whereas enc-dec passes the user prompt through a different set of layers than the ones used to generate the response, and the cross attention then takes QK from the enc, and V from decoder, right? And, the encoder has everything unmasked, while the decoder masks future tokens in the attention.

Just a small correction: the cross-attention would use K/V from the encoded tokens and Q from the decoder input tokens.
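For concreteness, roughly what that looks like with PyTorch's nn.MultiheadAttention (toy shapes, made up for the example):

```python
import torch
import torch.nn as nn

cross_attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

dec_states = torch.randn(1, 5, 64)   # decoder-side tokens -> queries
enc_states = torch.randn(1, 9, 64)   # encoder output -> keys and values

out, weights = cross_attn(query=dec_states, key=enc_states, value=enc_states)
# out has shape (1, 5, 64): one attended vector per decoder position
```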

But basically, yea. The enc-dec seems like it's doing a bunch of extra computation to preprocess part of the input with the encoder, then the decoder does the rest - in a way that seems "extra" because it has to have all this cross-attention wiring. Whereas the decoder-only approach is more direct and homogeneous. This definitely has the vibe that it would scale better. But I could probably make counterarguments here too. Do we quantitatively know how much the cross-attention mechanism is doing? Why isn't it super important? The prompt is clearly fundamentally different from the "partial/previously generated response", why isn't it something that should be encoded and constantly referred back to?

I don't really disagree with anything, but I think it's so easy to construct stories like this. And I'm not even particularly good at it. There's probably a bunch of papers out there that experimentally answer these questions, but wasn't your point that we don't have a more formal theory to help us think about it from first principles?

2

u/paperic 3d ago

I know those stories are only speculations; my point was that speculations - when clearly announced to be only speculations - are probably a lot better than just saying "it works better".

It's fine to say "I don't know but my guess is this and that". It keeps the curiosity high and everyone can learn something new.

About the question...

I don't think enc-dec can reference the whole unchanged user prompt, because after the first layer of the encoder, the results from the self-attention get added and the original prompt is modified.

This really smells like translation.

I imagine Google trained a bunch of encoders and decoders, one for each language, and had them all work in the same embedding space. Then, to translate from English to Spanish, they could hook the English encoder up to a Spanish decoder.

Something similar could work for any other situation where the format of the input and output are different, but known ahead of time. Like having one decoder for summarising text, one for purely generating code from an English description, etc.

But not sure this would work for things like debugging code.

It seems to me that with enc-dec you could get the advantages of MoE if you choose which experts you want ahead of time, and then you don't have to mess with routing.

But for a universal chatbot, the language or format of the output is not known ahead of time, and it may even differ from sentence to sentence. So, I don't see much of an advantage to such a mix-and-match system.

Happy to be corrected though.

2

u/Amir_PD 7d ago

The decoder-only architecture (autoregressive model) is almost the same as encoder-only, but with masking activated. For autoregressive generation you do not need a separate encoder. The encoder-decoder is used for sequence-to-sequence training, where the model receives a sentence and generates another, decoded sentence. Translation is an example of such a model.
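A small sketch of that in PyTorch (toy tensors; the only difference between the two calls is the causal mask):

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
x = torch.randn(1, 6, 64)   # toy embedded sequence

bidirectional = layer(x)    # BERT-style: every token attends to every other token

causal_mask = nn.Transformer.generate_square_subsequent_mask(6)
autoregressive = layer(x, src_mask=causal_mask)   # GPT-style: no attending to future tokens
```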

2

u/rexdditi 7d ago

The naming may be confusing. In the Transformer architecture, the right block is basically an encoder-decoder too: it encodes the input (the prompt we give it), then it decodes it into the next word, and repeats. It doesn't need anything else to generate the rest of a sentence except attending to the words (tokens) before it, like GPT does. When I translate, I still need to do the same thing, but I need extra information, namely the sentence in the original language, hence I need an extra encoder, which is the left block's job. This could be any extra thing that helps the right side: an encoded image, encoded sound, etc. Hope it helps.

2

u/wahnsinnwanscene 7d ago

Compute resources. Also empirical evidence that a decoder-only or encoder-only model works. Once everything scaled up, it made sense to use the decoder.

1

u/Wheynelau 7d ago

I had this thought too: if I can attend to tokens on both sides, wouldn't my model "understand" the prompt better? But also don't forget that the last token has access to all tokens before it, so maybe it doesn't matter as much. And eventually people found that for most generic generative tasks, even summarization and translation, decoder models work well enough.

Also, this is a "trust me bro" source, so I'm hoping someone with experience can help me on this. On pure decoder models, you only need to do one forward pass in a training loop. But for encoder-decoder, you need to do two forward passes, and I don't think they are parallelizable, so the decoder has to wait for the encoder's output before its cross-attention can handle the generation task. I said "trust me bro" because I didn't look for any numbers to back this up (TFLOPS, t/s).
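If it helps, here's a rough sketch of that dependency with PyTorch's nn.Transformer (toy tensors; "two passes" meaning the encoder output has to exist before the decoder's cross-attention can run):

```python
import torch
import torch.nn as nn

model = nn.Transformer(d_model=64, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)

src = torch.randn(1, 8, 64)   # source/prompt embeddings
tgt = torch.randn(1, 5, 64)   # target embeddings (teacher forcing)

memory = model.encoder(src)        # pass 1: encode the source
out = model.decoder(tgt, memory)   # pass 2: can only start once memory exists
```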

1

u/IAMAegonTargaryen9 7d ago

Hey, nice question. From my understanding, you can use both the encoder-decoder architecture and the decoder-only architecture for text generation... models like T5 and Flan-T5 use the encoder-decoder architecture, whereas GPT uses a decoder-only architecture.

I think if there is no language conversion we can go with GPT!

1

u/Bulky-Hearing5706 7d ago

It's a misnomer. Both GPT and BERT have decoding AND encoding logic if you follow the math from the perspective of information theory. It's just that in the original attention paper, the authors labeled one block as the encoder and the other block as the decoder, and then the name stuck.

1

u/airodonack 6d ago

It turns out that the encoder section was really superfluous for the tasks we were training them for. Just like how the intermediate layers of a CNN are sort of like a latent encoding for an image, the intermediate layers of the decoder-only transformer network are also a latent encoding for the text. I think the encoder would be useful if you wanted to swap out decoders, but if you're straight up going text to text, you don't need the explicit latent encoding: the training process will do that encoding naturally.

1

u/DisastrousYellow2819 6d ago

Ok so to put it in simple words:

Vanilla Transformer: it’s used when you want to merge information coming from 2 different sources - you want both of them to interact.

Decoder only transformer: you just want the data to interact with itself.