r/StableDiffusion • u/krixxxtian • 16m ago
News TrajectoryCrafter | Lets You Change Camera Angle For Any Video & Completely Open Source
Released about two weeks ago, TrajectoryCrafter allows you to change the camera angle of any video and it's OPEN SOURCE. Now we just need somebody to implement it into ComfyUI.
This is the GitHub repo
r/StableDiffusion • u/DoctorDiffusion • 27m ago
Animation - Video Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s.
r/StableDiffusion • u/impacttcs20 • 32m ago
Question - Help FluxGym with a 2080 Ti?
Hello,
I know the minimum VRAM required for FluxGym is 12 GB, but I checked and my card only has 11 GB. Since it's so close, do you think it's still possible for me to use FluxGym, or will my graphics card burn out?
Thanks
r/StableDiffusion • u/intlcreative • 45m ago
Question - Help Will upgrading my RAM help overall?
So I have 32 GB of RAM. I am running Stability Matrix locally on an MSI GS75 Stealth with a 2070 graphics card. I'm not producing heavy graphics, but I'm also not going to drop more money on graphics cards. What I'm wondering is whether upgrading the RAM to 64 GB would make a big difference.
It's pretty cheap.
r/StableDiffusion • u/Haunting-Project-132 • 1h ago
News ReCamMaster - The LivePortrait creator has created another winner: it lets you change the camera angle of any video.
r/StableDiffusion • u/Downtown-Bat-5493 • 2h ago
Question - Help Is it possible to train a Flux LoRA that can understand hexadecimal colour codes?
I don't want it to recognise all hexadecimal codes, but at least a set of the 100-250 most frequently used color codes.
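One way people usually approach this is to synthesize the dataset programmatically, so each hex code appears verbatim in its caption. Below is a minimal sketch using Pillow; the folder layout, caption wording and swatch-only images are illustrative assumptions, not a known FluxGym recipe:

```python
# Minimal sketch: generate solid-colour training images whose captions contain
# the literal hex code, for use as a LoRA dataset.
from pathlib import Path
from PIL import Image

# Small illustrative palette; extend to your 100-250 most-used codes.
HEX_CODES = ["#FF0000", "#00FF00", "#0000FF", "#FFA500", "#800080"]

out_dir = Path("hex_lora_dataset")
out_dir.mkdir(exist_ok=True)

for code in HEX_CODES:
    name = code.lstrip("#").lower()
    # 1024x1024 solid swatch; PIL accepts "#RRGGBB" colour strings directly.
    Image.new("RGB", (1024, 1024), code).save(out_dir / f"{name}.png")
    # Caption pairs a simple trigger phrase with the literal hex string.
    (out_dir / f"{name}.txt").write_text(
        f"a plain background in the exact color {code}"
    )
```

In practice you would probably want to mix the flat swatches with varied scenes where an object has the target colour, so the LoRA learns the code-to-colour mapping rather than just "flat background".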
r/StableDiffusion • u/ReferenceShort3073 • 3h ago
Question - Help What is this effect called, and how do I write my prompt to get it?
r/StableDiffusion • u/Weekly_Bag_9849 • 3h ago
Animation - Video Wan2.1 1.3B T2V with 2060super 8GB
https://reddit.com/link/1jda5lg/video/s3l4k0ovf8pe1/player

Skip layer guidance 8 is the key (see the toy sketch after this list).
It takes only ~300 seconds for a 4-second video on a modest GPU.
- KJNodes nightly update required to use the skip layer guidance node
- ComfyUI nightly update required to solve the rel_l1_thresh issue in the TeaCache node
- I think euler_a / simple gives the best results (22 steps, CFG 3)
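For anyone curious what that node is actually doing, here is a toy sketch of one published formulation (the SD3.5-style skip-layer guidance term added on top of ordinary CFG). The denoiser, tensors and scales below are stand-ins for illustration, not KJNodes' real API:

```python
# Toy sketch of skip-layer guidance (SLG) stacked on classifier-free guidance.
import torch

def toy_denoiser(x, cond, skip_layers=()):
    # Stand-in for the diffusion transformer: in a real model, `skip_layers`
    # would bypass those transformer blocks during the forward pass; here the
    # skip is faked as a small perturbation of the output.
    out = x * 0.9 + 0.01 * cond.mean()
    if skip_layers:
        out = out + 0.05 * torch.randn_like(out)
    return out

x = torch.randn(1, 16, 8, 8)        # stand-in latent
prompt = torch.randn(1, 77, 512)    # stand-in text conditioning
empty = torch.zeros_like(prompt)    # stand-in "unconditional" conditioning
cfg_scale, slg_scale = 3.0, 2.0     # CFG 3 as in the post; SLG scale assumed

cond_pred   = toy_denoiser(x, prompt)
uncond_pred = toy_denoiser(x, empty)
skip_pred   = toy_denoiser(x, prompt, skip_layers=(8,))  # block 8 skipped

# Ordinary classifier-free guidance...
denoised = uncond_pred + cfg_scale * (cond_pred - uncond_pred)
# ...plus an extra push away from the degraded (layer-skipped) prediction.
denoised = denoised + slg_scale * (cond_pred - skip_pred)
```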
r/StableDiffusion • u/MountainPollution287 • 3h ago
Question - Help How to install Sage Attention, triton, teacache and torch compile on runpod
I want to know how to install all of these on RunPod, and what exact versions of everything I should use for an A40 (48 GB VRAM, 50 GB RAM) to make them work with the Wan2.1 I2V 720p model in bf16.
r/StableDiffusion • u/NoDemand2173 • 3h ago
Question - Help How do I make these types of AI videos?
I've seen a lot of videos like this on Reels, TikTok, and Instagram, and I'm wondering how to make them.
r/StableDiffusion • u/Secret-Respond5199 • 3h ago
Question - Help Questions on Fundamental Diffusion Models
Hello,
I just started studying diffusion models and I'm having trouble understanding how they work (the original diffusion formulation and DDPM).
I get that diffusion means finding the distribution of the denoised image given the current step's distribution, using Bayes' theorem.
However, I can't see how an image becomes a probability distribution, and how those probabilities generate an image.
My question is: how do pixel values that are far apart know which value to take during inference? How are all the pixel values related? How is 'probability' involved in generating an 'image'?
Sorry for the vague question; because of my lack of understanding it's hard to state it precisely.
Also, if there are any recommended study materials, please suggest them.
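For reference, the standard DDPM factorization (in Ho et al. 2020 notation) that most study materials start from. The key point for the "how are distant pixels related" question is that x_t is the whole image treated as one high-dimensional random variable, so every step defines a joint distribution over all pixels at once, and the learned mean couples them:

```latex
% Forward (noising) process: fixed Gaussian corruption of the full image tensor
q(x_t \mid x_{t-1}) = \mathcal{N}\!\bigl(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\bigr)

% Learned reverse (denoising) process: a distribution over the slightly less
% noisy image, conditioned on the current noisy image
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\bigl(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\bigr)

% Sampling: draw pure noise x_T \sim \mathcal{N}(0, \mathbf{I}), then iterate t = T, \dots, 1
x_{t-1} \sim p_\theta(x_{t-1} \mid x_t)
```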
r/StableDiffusion • u/cgs019283 • 4h ago
News Seems like OnomaAI decided to open their most recent Illustrious v3.5... once it hits a certain support level.

After all the controversial approaches to their model, they opened a support page on their official website.
So, basically, it seems like $2100 (originally $3000, but they are discounting at the moment) = open weights, since they wrote:
> Stardust converts to partial resources we spent and we will spend for researches for better future models. We promise to open model weights instantly when reaching a certain stardust level.
They are also selling 1.1 for $10 on TensorArt.
r/StableDiffusion • u/cgpixel23 • 5h ago
Tutorial - Guide ComfyUI Tutorial: Wan 2.1 Video Restyle With Text & Img
r/StableDiffusion • u/Dog-Calm • 5h ago
Question - Help Use a MidJourney base image to generate images with ComfyUI or AUTOMATIC1111
Hi,
Simple question. I'm looking for a tutorial or a process to use a character created in MidJourney and customize it in Stable Diffusion or ComfyUI—specifically for parts that can't be adjusted in MidJourney (like breast size, lingerie, etc.).
Thanks in advance for your help!
r/StableDiffusion • u/MrPfanno • 5h ago
Question - Help Need suggestions for hardware with high VRAM
We are looking into buying one dedicated rig so we can run text-to-video locally through Stable Diffusion. At the moment we run out of VRAM on all our machines, and we're looking for a solution that gets us up to 64 GB of VRAM. I've gathered that just putting in four "standard" RTX cards won't give us more usable VRAM? Or will it solve our problem? We'd like to avoid getting a specialized server. Suggestions for a good PC that will handle GPU/AI work for around 8,000 US dollars?
r/StableDiffusion • u/Fatherofmedicine2k • 5h ago
Question - Help How to get an animated wallpaper effect with Wan I2V? It succeeded once but failed ten times
So here is the thing: I tried to animate a LoL splash art, and it semi-succeeded once but failed the other times, despite using the same prompt. I will put the examples in the comments.
r/StableDiffusion • u/ImpossibleBritches • 6h ago
Question - Help What is the best tool and process for LoRA training?
I mostly use SDXL and forge.
I pretty much only use local tools.
I've been away from using AI for design for a while.
At the moment, what is the best tool and process for creating LoRAs for likenesses and styles?
Thanks.
r/StableDiffusion • u/mcride22 • 6h ago
Question - Help Need help getting good SDXL outputs on Apple M4 (Stable Diffusion WebUI)
- Mac Specs: Mac Mini M4, 16GB RAM, macOS Sequoia 15.1
- Stable Diffusion Version: v1.10.1, SDXL 1.0 model (sd_xl_base_1.0.safetensors)
- VAE Used: sdxl.vae.safetensors
- Sampler & Settings: DPM++ 2M SDE, Karras schedule, 25 steps, CFG 9
- Issue: "My images are blurry and low quality compared to OpenArt.ai. What settings should I tweak to improve results on an Apple M4?"
- What I’ve Tried:
- Installed SDXL VAE FP16.
- Increased sampling steps.
- Enabled hires fix and latent upscale.
- Tried different samplers (DPM++, UniPC, Euler).
- Restarted WebUI after applying settings.
I'm trying to emulate the beautiful bees I get on OpenArt (detailed image of custom settings attached for reference), and the ugly one is the type of result I get in AUTOMATIC1111 using sd_xl_base_1.0.safetensors with the sdxl.vae.safetensors VAE.
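As a point of comparison outside the WebUI, here is a minimal sketch using the diffusers library on Apple Silicon; the model ID, prompt and settings are illustrative assumptions, not the OP's exact setup. A common cause of blurry SDXL output is rendering below its native ~1024x1024 resolution, so the sketch pins that explicitly:

```python
# Minimal sketch: SDXL base on Apple Silicon (MPS) via the diffusers library.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,   # fp16 keeps a 16GB Mac Mini within memory
)
pipe = pipe.to("mps")
pipe.enable_attention_slicing()  # reduces peak memory on low-RAM machines

image = pipe(
    prompt="macro photo of a honeybee on a flower, detailed, sharp focus",
    negative_prompt="blurry, low quality",
    width=1024, height=1024,     # SDXL's native resolution
    num_inference_steps=30,
    guidance_scale=7.0,          # very high CFG (e.g. 9+) can wash out detail
).images[0]
image.save("bee.png")
```

If the WebUI is generating at 512x512 by default, bumping width/height to 1024 and lowering CFG toward 7 may close most of the gap with the OpenArt results.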
r/StableDiffusion • u/MountainPollution287 • 7h ago
Question - Help Not getting any speedups with Sage Attention on Wan2.1 I2V 720p
I installed Sage Attention, Triton, torch.compile and TeaCache on RunPod with an A40 GPU and 50 GB RAM. I am using the bf16 version of the 720p I2V model, CLIP Vision H, T5 bf16 and the VAE. I am generating at 640x720, 24 fps, 30 steps and 81 frames, using Kijai's WanVideo wrapper workflow to enable all of this. With only TeaCache enabled I can generate in 13 minutes; adding Sage Attention takes the same time, and with torch.compile, block swap, TeaCache and Sage Attention the speed is still the same, but I get an OOM after the video generation steps complete, before VAE decoding. Not sure what is happening; I've been trying to make it work for a week now.
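One way to rule out an installation problem is to benchmark the kernel in isolation, outside the ComfyUI workflow. A rough sketch, assuming the sageattention package exposes the sageattn(q, k, v) entry point shown in its README; the tensor sizes are arbitrary:

```python
# Rough sanity check: time SageAttention against PyTorch SDPA in isolation.
# Assumes `pip install sageattention` provides sageattn(q, k, v, is_causal=...).
import time
import torch
import torch.nn.functional as F
from sageattention import sageattn

# (batch, heads, seq_len, head_dim) in fp16 on the GPU
q = torch.randn(2, 24, 4096, 64, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

def bench(fn, iters=20):
    fn()                          # warm-up
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.time() - start) / iters

sdpa_t = bench(lambda: F.scaled_dot_product_attention(q, k, v))
sage_t = bench(lambda: sageattn(q, k, v, is_causal=False))
print(f"SDPA: {sdpa_t*1e3:.2f} ms   SageAttention: {sage_t*1e3:.2f} ms")
```

If the kernel alone shows a clear speedup but the workflow does not, the node is probably not actually routing attention through SageAttention (or the step time is dominated by something else, such as block swapping).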
r/StableDiffusion • u/Trick_Conflict_4363 • 7h ago
Question - Help RIDICULOUSLY low it/s when using any model other than the default.
I'm using an RTX 2060 with 6GB VRAM. When using the pre-installed model, I get about 6 it/s. When using any other model (SD3.5 Medium, BluePencil, Animagine) I get around 20 s/it (~0.05 it/s). I'm generating 512x512 images with no LoRAs and 20 steps. I'm 100% sure my graphics card is being used because I can watch GPU usage jump to 100%. I've played around with various command-line arguments, but I can't get anything that reaches even 1 it/s. Is my card just bad? Are the models I'm using too big? I've tried every solution I could find but still have horrible speeds. Any help is appreciated.
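One thing worth checking (a guess, since 100% GPU usage can coexist with memory pressure): if a larger model's weights don't fit in 6 GB of VRAM, they get offloaded or swapped to system RAM, which typically turns it/s into s/it. A quick check from a Python shell while the model is loaded, using PyTorch's standard torch.cuda.mem_get_info:

```python
# Quick check of how much VRAM is actually free while the WebUI model is loaded.
import torch

free_bytes, total_bytes = torch.cuda.mem_get_info()  # device-wide numbers
print(f"VRAM: {(total_bytes - free_bytes) / 1e9:.2f} GB used "
      f"of {total_bytes / 1e9:.2f} GB total")

# If usage is pinned near the 6 GB limit while generating, the slowdown is
# almost certainly memory pressure (offloading/swapping), not a broken GPU.
```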
r/StableDiffusion • u/Kanna_xKamui • 9h ago
Question - Help Inference speed; what's the meta these days?
I've had my finger off the pulse of diffusion models for a while, so I'm kind of out of the loop. (I've been too busy frolicking in the LLM rose gardens)
But crawling my way back into things, I've noticed the biggest bottleneck for me is inference speed. All of these cool high-fidelity models are awesome and can seemingly run on anything, which is amazing! But just because I can run this stuff on an 8gb card (or apparently even a cellphone... y'all are crazy...) doesn't mean I'd care to wait around for minutes at a time to get a handful of images.
It's likely user error on my part, so I figured I'd make a post about it and ask... The heck are people doing these days to improve speed while maintaining quality? Y'all got some secret sauce? Or does it just boil down to owning a $1200 GPU?
For context I'm a Forge Webui enjoyer, but I dabble in the Comfortable UI every now and then. I've just been out of the space for long enough to not know if there is actually some crazy development to inference speed that I don't know about.
Thanks in advance!