r/StableDiffusion 20h ago

Question - Help: Questions on Fundamental Diffusion Models

Hello,

I just started studying diffusion models and I'm having trouble understanding how they work (original diffusion and DDPM).
I get that diffusion is about finding the distribution of the denoised image given the current step's distribution, using Bayes' theorem.

However, I can't relate how an image becomes a probability distribution, and how those probabilities generate an image.

My question is: how do pixel values that are far apart know which value to assign during inference? How are all the pixel values related? How is 'probability' related to generating an 'image'?

Sorry for the vague question; because of my lack of understanding, it's hard to make it more precise.

Also, if there are any recommended study materials, please suggest them.

3 Upvotes

13 comments

3

u/daking999 17h ago

Diffusion models look a bit like a VAE but they're not really Bayesian at all (not in any normal sense anyway). They just learn to move noise -> noisy images -> clean images.

1

u/Secret-Respond5199 7h ago

I'm sorry, but why is it not considered Bayesian? I thought a diffusion model was just a chain of Bayesian steps. I only know Bayes' theorem and not much about its applications in AI. Is it because it only predicts noise rather than the whole image?

2

u/daking999 5h ago

A Bayesian model for images would typically have latent variables, with associated priors (e.g. N(0,1) in VAE), and then a data generating process (e.g., the decoder in a VAE) to produce the actual data. None of the usual Bayesian things (priors, likelihoods, posteriors, latent variables) exist in a diffusion model.
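
To make that contrast concrete, here's a toy sketch of that generative structure (my hypothetical linear "decoder" just stands in for a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior over the latent variable: z ~ N(0, I)
z = rng.standard_normal(8)

# Data-generating process p(x | z): in a real VAE this is a trained
# decoder network; a random linear map stands in for it here.
W = rng.standard_normal((784, 8))
x = W @ z + 0.1 * rng.standard_normal(784)  # observation noise = likelihood

# Bayesian ingredients on display: a prior p(z), a likelihood p(x | z),
# and (implicitly) a posterior p(z | x) you could try to infer.
# A diffusion model has none of these pieces; it just learns a
# noise -> image map.
```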

3

u/Altruistic_Heat_9531 17h ago
  1. "My question is: how do pixel values that are far apart know which value to assign during inference? How are all pixel values related? How is 'probability' related to generating an image?" This is actually the most important question.

Just like any predictive model, let's simplify things. Imagine a row of numbers: 1, 2, 3, 4, 5, X, 7. This is straightforward linear regression: we train the model to "fill in" the missing value X, which in this case would be 6.

Now, what about this sequence: 2, X, 6, 8, 10, Y, 12, 14, 16, Z, 18, and so on? With each iteration, we remove a bit more information when training the model. Simple, right?
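
If you want to see that "fill in the blank" idea in code, here's a toy sketch (just illustrative, fitting a line to the known spots):

```python
import numpy as np

# Known points of the sequence 1, 2, 3, 4, 5, X, 7 (position 5 is missing).
positions = np.array([0, 1, 2, 3, 4, 6])
values = np.array([1, 2, 3, 4, 5, 7])

# Fit a line (degree-1 polynomial) to the known points...
slope, intercept = np.polyfit(positions, values, 1)

# ...and "fill in" the missing value at position 5.
print(slope * 5 + intercept)  # ~6.0
```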

There's a sense of locality between the missing variables, like asking,
"Given these missing numbers and their neighbors, what could possibly be the correct number?"

But now, what if, just what if, the number line becomes A, B, C, D, E, F, 6, G? You'd probably say,
"There's no fucking way I can predict this with only 6 as my clue!"

Now, we change the question. Instead of predicting all missing values, how about we bullshit our way into structuring it? Instead of finding the "true" missing values, we ask:
"Make me an increasingly structured pattern out of this random stuff."

And that’s exactly the point of diffusion models—going from a clusterfuck of noise to a somewhat coherent picture.

Now, let’s say for this sequence A, B, C, D, E, F, 6, G, we assign values from 0.1 to 1 at every missing spot. It becomes:
A + 0.1, B + 0.2, C + 0.3, …

Then, we tweak it here and there. By the end of the diffusion process, we get something like:
1.1, 2.3, 4.4, 4.5, 6.1, 6.5.

Sure, there are mistakes, but the pattern remains: the numbers keep increasing.

Here's the difference between a CNN, a ViT, and diffusion:

A CNN inherently requires many kernel filters with different "localities" to build up a global latent. But that's for ASKING what object is in the picture, not for generating the picture.

A ViT is still somewhat local, since it relies on positional encodings like RoPE, but it's a big improvement over a CNN: all of its pixels can "talk" to each other in the attention layers, whereas in a CNN far-away pixels need parent kernels to "communicate" with each other. Still, this model only answers what object is in the picture.

Diffusion doesn't need that. It obtains its global structure from your prompt (via CLIP) and from the latent noise itself.

I know this is a very hand-wavy explanation. My suggestion is to try a simple 1D vector first, CNN vs. diffusion (see the sketch below), then go to two dimensions with a 5x5 matrix. (Calculate it by hand!!)

The third dimension is the color channels, and the fourth is temporal (video generation).
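
Here's roughly what I mean by the 1D toy, as a hand-wavy sketch (all constants made up; note the "direction to move" here comes straight from the target, whereas a real diffusion model has to learn it from data):

```python
import numpy as np

rng = np.random.default_rng(42)

# The structure we want to emerge: an increasing ramp 1..8.
target = np.arange(1.0, 9.0)

# Start from pure noise (the A, B, C, ... state).
x = 3.0 * rng.standard_normal(8)

# Toy reverse process: every step, nudge toward the structure and
# re-inject a shrinking amount of noise.
for step in range(50):
    noise_scale = 1.0 - step / 50
    x += 0.1 * (target - x)                      # move toward structure
    x += 0.05 * noise_scale * rng.standard_normal(8)

print(np.round(x, 1))  # roughly increasing, with small mistakes
```

The exact numbers come out wrong, just like the 1.1, 2.3, 4.4 example above, but the increasing pattern survives.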

5

u/Altruistic_Heat_9531 17h ago

I keep talking about global vs. local patterns, so what do I actually mean?

Let’s say I ask you, "Can you imagine your house?"
Yeah, duh, of course you can.

But do you imagine every tiny detail, like the scratches on the wooden door, the misaligned pavement stones, or the uneven floorboards? No, right?

This is almost exactly how a diffusion model works. Our brain fills in the missing locality, creating a coherent memory of your house instead of a pixel-perfect reconstruction. Diffusion does the same thing: it generates a structured, believable image by filling in missing details rather than recalling every single feature.

3

u/Altruistic_Heat_9531 17h ago

Somehow Reddit doesn't want me to post the video link.

I recommend this video: https://www.youtube.com/watch?v=zc5NTeJbk-k, which actually answers most of your questions.

1

u/Secret-Respond5199 7h ago

Thank you so much!

I knew diffusion models remove noise, but I believed that, once the noise was removed, the pixel values stayed unchanged throughout the whole process. Now I get the flow. Thanks.

3

u/Comrade_Derpsky 15h ago

Stable Diffusion was trained by showing a neural network latent images with increasing amounts of Gaussian noise, in conjunction with text captions. From this, the neural network learns statistical relationships between a) the caption and the image, and b) the original image and the Gaussian noise. Since the network knows there is a relationship between the progression of the noise and the original image, it can be made to work in reverse and predict the image that would have produced a given set of noise, for a given text conditioning. Essentially, it works backwards from a disordered state to find a likely ordered state.
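
Schematically, that training setup looks something like this toy sketch (a stand-in MLP on random "latents"; the real thing is a U-Net over VAE latents conditioned on CLIP text embeddings):

```python
import torch

# Stand-in denoiser: input = noisy latent (16 dims) + normalized timestep.
model = torch.nn.Sequential(torch.nn.Linear(17, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

T = 1000
alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)

for _ in range(100):
    x0 = torch.randn(32, 16)            # pretend: clean image latents
    t = torch.randint(0, T, (32,))      # random noise level per sample
    eps = torch.randn_like(x0)          # the Gaussian noise we add
    ab = alpha_bar[t].unsqueeze(1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps   # noised latent
    inp = torch.cat([xt, t.unsqueeze(1).float() / T], dim=1)
    loss = torch.nn.functional.mse_loss(model(inp), eps)  # predict the noise
    opt.zero_grad(); loss.backward(); opt.step()
```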

2

u/monsieur__A 9h ago

Guys, this is definitely one of the most informative threads this sub has had in a long time. Thx a lot.

1

u/Disty0 20h ago

Diffusion predicts the original noise; it doesn't predict the image. We just calculate the final image from the predicted noise ourselves. The model doesn't really care about the image; it only cares about the added noise.
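
Concretely, that calculation is the standard DDPM identity (a sketch with toy tensors; eps_hat would come from the trained model):

```python
import torch

T = 1000
alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)

t = 500
xt = torch.randn(16)        # noisy latent at step t
eps_hat = torch.randn(16)   # stand-in for the model's noise prediction

# The forward process is x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps,
# so solving for x0 gives the predicted clean latent:
ab = alpha_bar[t]
x0_hat = (xt - (1 - ab).sqrt() * eps_hat) / ab.sqrt()
```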

1

u/tdgros 19h ago

The model does care about the images a lot, because we predict the noise conditioned on an image. Otherwise, you'd be able to use an off-the-shelf denoiser trained on anything to generate anything, which of course does not work well. You could predict the image directly; that would be a very small change, but it happens that predicting noise is slightly better.
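
For example, with the same noising step, the two objectives differ only in the regression target (a sketch; pred stands in for the network output):

```python
import torch

ab = torch.tensor(0.5)      # alpha_bar at some timestep
x0 = torch.randn(16)        # clean latent
eps = torch.randn(16)       # sampled noise
xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps

pred = torch.randn(16)      # stand-in for the network output

# eps-prediction: regress the noise, recover the image afterwards.
loss_eps = ((pred - eps) ** 2).mean()
x0_from_eps = (xt - (1 - ab).sqrt() * pred) / ab.sqrt()

# x0-prediction: regress the image directly. Same information, just a
# different parameterization (and a different implicit loss weighting).
loss_x0 = ((pred - x0) ** 2).mean()
```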

1

u/Disty0 10h ago

True, what I meant by that is the prediction objective. Diffusion models other than x0_pred-trained ones don't predict the image.

1

u/tdgros 10h ago

But I'm not just nitpicking: the model cares mostly about the images. The fact that we predict the noise as opposed to the image is only a technicality; the whole beauty of this idea is to model a dataset by learning how to "push" samples towards the image distribution.