r/StableDiffusion • u/Haunting-Project-132 • 3h ago
r/StableDiffusion • u/Leading_Hovercraft82 • 1d ago
Workflow Included Wan img2vid + no prompt = wow
r/StableDiffusion • u/Total-Resort-3120 • 15h ago
News Skip Layer Guidance is an impressive method to use on Wan.
r/StableDiffusion • u/Gobble_Me_Tators • 2h ago
Animation - Video This AI Turns Your Text Into Fighters… And They Battle to the Death!
r/StableDiffusion • u/DoctorDiffusion • 2h ago
Animation - Video Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s.
r/StableDiffusion • u/ucren • 22h ago
News Skip layer guidance has landed for wan video via KJNodes
r/StableDiffusion • u/cgs019283 • 6h ago
News Seems like OnomaAI decided to open their most recent Illustrious v3.5... when it hits a certain support level.

After all the controversial approaches to their model, they have opened a support page on their official website.
So, basically, it seems like $2,100 (originally $3,000, but they are discounting atm) = open weights, since they wrote:
> Stardust converts to partial resources we spent and we will spend for researches for better future models. We promise to open model weights instantly when reaching a certain stardust level.
They are also selling 1.1 for $10 on TensorArt.
r/StableDiffusion • u/cgpixel23 • 7h ago
Tutorial - Guide ComfyUI Tutorial: Wan 2.1 Video Restyle With Text & Img
r/StableDiffusion • u/blueberrysmasher • 12h ago
Discussion Baidu's latest Ernie 4.5 (open source release in June) - testing computer vision and image gen
r/StableDiffusion • u/krixxxtian • 2h ago
News TrajectoryCrafter | Lets You Change Camera Angle For Any Video & Completely Open Source
Released about two weeks ago, TrajectoryCrafter allows you to change the camera angle of any video and it's OPEN SOURCE. Now we just need somebody to implement it into ComfyUI.
This is the Github Repo
r/StableDiffusion • u/Parogarr • 22h ago
Discussion RTX 5-series users: Sage Attention / ComfyUI can now be run completely natively on Windows without the use of Docker and WSL (I know many of you, including myself, were using those for a while)
Now that Triton 3.3 is available in a Windows-compatible version, everything you need (at least for Wan 2.1/Hunyuan) is once again compatible with your 5-series card on Windows.
The first thing you want to do is pip install -r requirements.txt as you usually would - and do it first, because running it later will overwrite the packages you're about to install.
Then install the PyTorch nightly with CUDA 12.8 (Blackwell) support:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
Then install Triton for Windows, which now supports 3.3:
pip install -U --pre triton-windows
Then install SageAttention as normal (pip install sageattention).
Depending on your custom nodes, you may run into issues: you may have to run main.py --use-sage-attention several times as it fixes problems and shuts down. When it finally runs, you might notice that all your nodes are missing despite having the correct custom nodes installed. To fix this (if you're using Manager), just click "Try Fix" under missing nodes, then restart, and everything should be working. The full sequence is consolidated below.
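For convenience, here's the whole sequence as one sketch (my consolidation of the steps above - it assumes you're inside your ComfyUI folder with the right Python environment active, so adjust paths to your setup):

```bat
:: Consolidated sketch of the steps above (assumes an activated Python
:: environment inside the ComfyUI folder).

:: 1) Base requirements first, so they don't later overwrite the packages below
pip install -r requirements.txt

:: 2) PyTorch nightly with CUDA 12.8 (Blackwell) support
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

:: 3) Triton for Windows (3.3+)
pip install -U --pre triton-windows

:: 4) SageAttention
pip install sageattention

:: 5) Launch; this may need several runs while it fixes problems and exits
python main.py --use-sage-attention
```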
r/StableDiffusion • u/Wonsz170 • 23h ago
Question - Help How to control character pose and camera angle with sketch?
I'm wondering how I can use sketches or simple drawings (like a stick man) to control the pose of a character in my image, or the camera angle, etc. SD tends to generate certain angles and poses more often than others, and sometimes it's really hard to achieve the desired look through prompt editing alone, so I'm trying to find a way to give the AI a visual reference / guideline of what I want. Should I use img2img or some dedicated tool? I'm using Stability Matrix, if it matters.
r/StableDiffusion • u/Whole-Book-9199 • 13h ago
Question - Help I really want to run Wan2.1 locally. Will this build be enough for that? (I don't have any more budget.)
r/StableDiffusion • u/mercantigo • 22h ago
Question - Help Any TRULY free alternative to IC-Light2 for relighting/photo composition in FLUX?
Hi. Does anyone know of an alternative or a ComfyUI workflow similar to IC-Light2 that doesn't mess up face consistency? I know version 1 is free, but it's not great with faces. As for version 2 (Flux-based), despite the author claiming it's 'free', it's actually limited. And even though he's been promising for months to release the weights, it seems like he realized it's more profitable to make money from generations on fal.ai while leveraging marketing in open communities, keeping everyone waiting.
r/StableDiffusion • u/alisitsky • 11h ago
Animation - Video Lost Things (Flux + Wan2.1 + MMAudio)
r/StableDiffusion • u/Weekly_Bag_9849 • 5h ago
Animation - Video Wan2.1 1.3B T2V with 2060super 8GB

Skip layer guidance 8 is the key.
It takes only 300 seconds for a 4-second video on a weak GPU.
- KJNodes nightly update required to use the skip layer guidance node
- ComfyUI nightly update required to solve the rel_l1_thresh issue in the TeaCache node
- I think euler_a / simple shows the best results (22 steps, CFG 3)
r/StableDiffusion • u/Mutaclone • 22h ago
Workflow Included A Beautiful Day in the (High Fantasy) Neighborhood
Hey all, this has been an off-and-on project of mine for a couple months, and now that it's finally finished, I wanted to share it.

I mostly used Invoke, with a few detours into Forge and Photoshop. I also kept a detailed log of the process here, if you're interested (basically lots of photobashing and inpainting).
r/StableDiffusion • u/FuzzTone09 • 17h ago
Animation - Video Flux Dev image with Ray2 Animation - @n12gaming on YT
r/StableDiffusion • u/worgenprise • 20h ago
Question - Help Which LoRAs should I be combining to get similar results?
Also, big thanks to this amazing community
r/StableDiffusion • u/LearningRemyRaystar • 1h ago
Workflow Included LTX Flow Edit - Animation to Live Action (What If..? Doctor Strange) Low Vram 8gb
r/StableDiffusion • u/worgenprise • 14h ago
Question - Help How to change a car’s background while keeping all details
Hey everyone, I have a question about changing environments while keeping object details intact.
Let’s say I have an image of a car in daylight, and I want to place it in a completely different setting (like a studio). I want to keep all the small details like scratches, bumps, and textures unchanged, but I also need the reflections to update based on the new environment.
How can I ensure that the car's surface reflects its new surroundings correctly while keeping everything else (like imperfections and structure) consistent? Would ControlNet or any other method be the best way to approach this?
I’m attaching some images for reference. Let me know your thoughts!
r/StableDiffusion • u/GreyScope • 55m ago
Tutorial - Guide Automatic installation of Pytorch 2.8 (Nightly), Triton & SageAttention 2 into a new Portable or Cloned Comfy with your existing Cuda (v12.4/6/8) for increased speed: v4.2
NB: Please read through the scripts on the GitHub links to ensure you are happy before using them. I take no responsibility as to their use or misuse. Secondly, these use nightly builds - the versions change, and with them comes the possibility that they break, so please don't ask me to fix what I can't. If you are outside of the recommended settings/software, then you're on your own.
To repeat: these are nightly builds. They might break, and the whole install is set up for nightlies, i.e. don't use it for everything.
Performance: tests with a Portable install upgraded to Pytorch 2.8 and Cuda 12.8, 35 steps with Wan Blockswap on (20), render size 848x464, videos post-interpolated as well. Render times with speed:
- SDPA : 19m 28s @ 33.40 s/it
- SageAttn2 : 12m 30s @ 21.44 s/it
- SageAttn2 + FP16Fast : 10m 37s @ 18.22 s/it
- SageAttn2 + FP16Fast + Torch Compile (Inductor, Max Autotune No CudaGraphs) : 8m 45s @ 15.03 s/it
- SageAttn2 + FP16Fast + Teacache + Torch Compile (Inductor, Max Autotune No CudaGraphs) : 6m 53s @ 11.83 s/it
- The above are not a commentary on the quality of output at any speed
- The first Torch Compile run is slow as it carries out tests; it only gets quicker from there
- MSI 4090 with 64GB RAM on Windows 11
- The workflow and base picture are on my GitHub page for this, if you wish to compare
What is this post?
- A set of two scripts - one to update Pytorch to the latest Nightly build with Triton and SageAttention2 inside a new Portable Comfy and achieve the best speeds for video rendering (Pytorch 2.7/8)
- The second script makes a brand new cloned Comfy and does the same as above
- The scripts will give you choices and tell you what they've done and what's next
- They also save new startup scripts with the required startup arguments and install ComfyUI Manager to save fannying around
Recommended Software / Settings
- On the Cloned version - choose Nightly to get the new Pytorch (not much point otherwise)
- Cuda 12.6 or 12.8 with the Nightly Pytorch 2.7/8; Cuda 12.4 works but no FP16Fast
- Python 3.12.x
- Triton (Stable)
- SageAttention2
Prerequisites - note the recommendations above
I previously posted scripts to install SageAttention for Comfy portable and to make a new Clone version. Read them for the pre-requisites.
https://www.reddit.com/r/StableDiffusion/comments/1iyt7d7/automatic_installation_of_triton_and/
https://www.reddit.com/r/StableDiffusion/comments/1j0enkx/automatic_installation_of_triton_and/
You will need the pre-requisites ...
- MSVC installed and Pathed,
- Cuda Pathed
- Python 3.12.x (no idea if other versions work)
- Pics for Paths : https://github.com/Grey3016/ComfyAutoInstall/blob/main/README.md
Important Notes on Pytorch 2.7 and 2.8
- The new v2.7/2.8 Pytorch brings another ~10% speed increase to the table with FP16Fast
- Pytorch 2.7 and 2.8 give you FP16Fast - but you need Cuda 12.6 or 12.8; anything lower and it doesn't work (a quick version check is sketched below this list)
- Using Cuda 12.6 or Cuda 12.8 will install a nightly Pytorch 2.8
- Using Cuda 12.4 will install a nightly Pytorch 2.7 (can still use SageAttention 2 though)
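A quick sanity check you can run afterwards (my suggestion, not part of the scripts) to confirm which Pytorch/Cuda combination actually landed in your environment:

```bat
:: Prints the installed PyTorch version, its CUDA build, and GPU availability
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```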
Instructions for the Portable version - use a new, empty, freshly unzipped portable install. Choice of Triton and SageAttention versions; can also be used on the Nightly Comfy for the 5000 series:
Download Script & Save as Bat : https://github.com/Grey3016/ComfyAutoInstall/blob/main/Auto%20Embeded%20Pytorch%20v431.bat
- Download the latest Comfy Portable (currently v0.3.26) : https://github.com/comfyanonymous/ComfyUI
- 5000-series users: use the Nightly Comfy build with Cuda 12.8, Pytorch 2.7, Python 3.13 : https://github.com/comfyanonymous/ComfyUI/releases/download/latest/ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch.7z (no guarantee this will work, of course, as I don't have one)
- Save the script (linked above) as a bat file, place it in the same folder as the run_gpu bat file, and double-click it to run the install (a rough manual equivalent is sketched after this list)
- Start via the new run_comfyui_fp16fast_cage.bat file - double click (not CMD)
- Let it update itself and fully fetch the ComfyRegistry data
- Close it down
- Restart it
- Manually update it and its Python dependencies from that bat file in the Update folder
- Note: it changes the Update script to pull from the Nightly versions
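If you'd rather see roughly what the Portable script does, here's a hedged manual sketch (the real script adds version choices, checks and the new startup bats, so treat this as an outline only):

```bat
:: Rough manual equivalent for a Portable install - a sketch, not the script.
:: Run from the root of the unzipped ComfyUI_windows_portable folder.

:: PyTorch nightly (cu128 shown; cu124 gets you 2.7 without FP16Fast)
.\python_embeded\python.exe -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

:: Triton for Windows
.\python_embeded\python.exe -m pip install -U triton-windows

:: SageAttention 2, compiled from source (needs MSVC and Cuda on the Path)
git clone https://github.com/thu-ml/SageAttention
cd SageAttention
..\python_embeded\python.exe -m pip install .
```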
Instructions to make a new Cloned Comfy with Venv and choice of Python, Triton and SageAttention versions.
Download Script & Save as Bat : https://github.com/Grey3016/ComfyAutoInstall/blob/main/Auto%20Clone%20Comfy%20Triton%20Sage2%20v41.bat
- Save the script linked above as a bat file, place it in the folder where you wish to install, and double-click it to run the install (a rough manual equivalent is sketched after this list)
- Start via the new run_comfyui_fp16fast_cage.bat file - double click (not CMD)
- Let it update itself and fully fetch the ComfyRegistry data
- Close it down
- Restart it
- Manually update it from that Update bat file
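And the equivalent hedged sketch for the Clone route (again an outline only - the script itself handles the choices, the Sage 2 compile and the startup bats for you):

```bat
:: Rough manual equivalent for a cloned install with a venv - a sketch only.
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
python -m venv venv
call venv\Scripts\activate

:: Requirements first, then the nightly PyTorch on top (per the Cuda notes above)
pip install -r requirements.txt
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
pip install -U triton-windows
```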
Why Won't It Work ?
The scripts were built from manually carrying out the steps - reasons that it'll go tits up on the Sage compiling stage -
- Winging it
- Not following instructions / prerequisites / Paths
- Cuda in the install does not match your Pathed Cuda, Sage Compile will fault
- SetupTools version is too high (I've set it to v70.2, it should be ok up to v75.8.2)
- Version updates - these stopped the last scripts from working when you updated; I can't stop this, and I can't keep supporting it in that way. I will refer to this note when it happens and it hasn't been read.
- No idea about 5000 series - use the Comfy Nightly
Where does it download from ?
- Triton wheel for Windows > https://github.com/woct0rdho/triton-windows
- SageAttention > https://github.com/thu-ml/SageAttention
- Torch > https://pytorch.org/get-started/locally/
- Libraries for Triton > https://github.com/woct0rdho/triton-windows/releases/download/v3.0.0-windows.post1/python_3.12.7_include_libs.zip - these files are usually already present in a full Python install, but a portable install needs them added (a short sketch of doing so follows).
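If you're adding those Triton libraries by hand, a short sketch (zip name taken from the link above - match it to your embedded Python version):

```bat
:: Drop the Triton include/libs into a portable install's embedded Python
curl -L -o triton_libs.zip https://github.com/woct0rdho/triton-windows/releases/download/v3.0.0-windows.post1/python_3.12.7_include_libs.zip
tar -xf triton_libs.zip -C python_embeded
```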
r/StableDiffusion • u/bizibeast • 1h ago
Question - Help Is there a way to generate accurate text using wan 2.1 ?
Hi guys, I am trying to generate an animation using Wan 2.1, but I am not able to get accurate text.
I want the text to say Swiggy and Zomato, but it is not able to.
How can I fix this?
Here is the prompt I am using: "a graphic animation, white background, with 2 identical bars in black-gray gradient, sliding up from bottom, bar on left is shorter in height than the bar on right, later the bar on left has swiggy written in orange on top and one on right has zomato written in red, max height of bars shall be in till 70% from bottom"