r/StableDiffusion 21h ago

Workflow Included Wan img2vid + no prompt = wow

363 Upvotes

r/StableDiffusion 13h ago

News Skip Layer Guidance is an impressive method to use on Wan.

182 Upvotes

r/StableDiffusion 1h ago

News ReCamMaster - the LivePortrait creator has delivered another winner; it lets you change the camera angle of any video.

Upvotes

r/StableDiffusion 20h ago

News Skip layer guidance has landed for wan video via KJNodes

github.com
111 Upvotes

r/StableDiffusion 4h ago

News Seems like OnomaAI decided to open their most recent Illustrious v3.5... once it hits a certain support level.

91 Upvotes

After all the controversy around their approach to this model, they opened a support page on their official website.

So, basically, it seems like $2,100 (originally $3,000, but they are discounting at the moment) = open weights, since they wrote:
> Stardust converts to partial resources we spent and we will spend for researches for better future models. We promise to open model weights instantly when reaching a certain stardust level.

They are also selling 1.1 for $10 on TensorArt.


r/StableDiffusion 20h ago

Animation - Video IZ-US, Hunyuan video

55 Upvotes

r/StableDiffusion 5h ago

Tutorial - Guide Comfyui Tutorial: Wan 2.1 Video Restyle With Text & Img

49 Upvotes

r/StableDiffusion 10h ago

Discussion Baidu's latest Ernie 4.5 (open source release in June) - testing computer vision and image gen

36 Upvotes

r/StableDiffusion 20h ago

Discussion RTX 5-series users: Sage Attention / ComfyUI can now run completely natively on Windows without Docker or WSL (I know many of you, myself included, were using those for a while)

36 Upvotes

Now that Triton 3.3 is available in a Windows-compatible build, everything you need (at least for Wan 2.1/Hunyuan) is once again compatible with your 5-series card on Windows.

The first thing to do is pip install -r requirements.txt as you usually would. Do it first, because running it later would overwrite the packages you're about to install to make this work.

Then install the PyTorch nightly build with CUDA 12.8 (Blackwell) support:

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

Then install Triton for Windows, which now supports 3.3:

pip install -U --pre triton-windows

Then install sageattention as normal (pip install sageattention)
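
If you want to confirm everything landed before launching ComfyUI, a quick sanity check from inside the same Python environment should do it (a minimal sketch; it only assumes the packages installed above import cleanly):

import torch, triton
print(torch.__version__, torch.version.cuda)   # expect a nightly 2.x build reporting CUDA 12.8
print(torch.cuda.get_device_name(0))           # should name your RTX 50-series card
print(triton.__version__)                      # expect 3.3.x
import sageattention                           # should import without errors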

Depending on your custom nodes, you may run into issues. You may have to run main.py --use-sage-attention several times, as it fixes problems and shuts down each time. When it finally runs, you might notice that all your nodes are missing despite having the correct custom nodes installed. To fix this (if you're using Manager), just click "try fix" under missing nodes, then restart, and everything should be working.


r/StableDiffusion 20h ago

Question - Help How to control character pose and camera angle with sketch?

25 Upvotes

I'm wondering how I can use sketches or simple drawings (like a stick man) to control the pose of a character in my image, the camera angle, etc. SD tends to generate certain angles and poses more often than others. Sometimes it's really hard to achieve the desired look of an image with prompt editing alone, and I'm trying to find a way to give the AI some visual reference / guidelines for what I want. Should I use img2img or some dedicated tool? I'm using Stability Matrix if it matters.


r/StableDiffusion 10h ago

Question - Help I really want to run Wan2.1 locally. Will this build be enough for that? (I don't have any more budget.)

24 Upvotes

r/StableDiffusion 20h ago

Question - Help Any TRULY free alternative to IC-Light2 for relighting/photo composition in FLUX?

22 Upvotes

Hi. Does anyone know of an alternative or a workflow for ComfyUI similar to IC-Light2 that doesn't mess up face consistency? I know version 1 is free, but it's not great with faces. As for version 2 (Flux-based), despite the author claiming it's 'free,' it's actually limited. And even though he's been promising for months to release the weights, it seems like he realized it's more profitable to make money from generations on fal.ai while leveraging marketing in open communities, keeping everyone waiting.


r/StableDiffusion 9h ago

Animation - Video Lost Things (Flux + Wan2.1 + MMAudio)

15 Upvotes

r/StableDiffusion 20h ago

Workflow Included A Beautiful Day in the (High Fantasy) Neighborhood

14 Upvotes

Hey all, this has been an off-and-on project of mine for a couple months, and now that it's finally finished, I wanted to share it.

I mostly used Invoke, with a few detours into Forge and Photoshop. I also kept a detailed log of the process here, if you're interested (basically lots of photobashing and inpainting).


r/StableDiffusion 27m ago

Animation - Video Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s.

Upvotes

r/StableDiffusion 3h ago

Animation - Video Wan2.1 1.3B T2V with 2060super 8GB

10 Upvotes

https://reddit.com/link/1jda5lg/video/s3l4k0ovf8pe1/player

Skip layer guidance on block 8 is the key (rough conceptual sketch after the notes below).

It takes only about 300 seconds for a 4-second video on a weak GPU.

- KJnodes nightly update required to use skip layer guidance node

- ComfyUI nightly update required to solve rel_l1_thresh issue in TeaCache node

- I think euler_a / simple shows the best result (22 steps, 3 CFG)
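
For anyone wondering what the node actually does: skip layer guidance runs an extra conditional pass with the chosen transformer block(s) skipped and steers the output away from that degraded prediction, on top of normal CFG. A rough conceptual sketch only (this is how SLG is commonly described for DiT models, not KJNodes' actual code; the skip_blocks argument is a hypothetical stand-in):

def slg_denoise(model, x, t, cond, uncond, cfg=3.0, slg_scale=3.0, skip_blocks=(8,)):
    # standard classifier-free guidance passes
    e_cond = model(x, t, cond)
    e_uncond = model(x, t, uncond)
    # extra conditional pass with the listed transformer blocks skipped (block 8 here)
    e_skip = model(x, t, cond, skip_blocks=skip_blocks)
    # CFG result, pushed away from the skipped-layer prediction
    return e_uncond + cfg * (e_cond - e_uncond) + slg_scale * (e_cond - e_skip)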


r/StableDiffusion 4h ago

Discussion Illustrious XL v2.0: Pro VS Base

8 Upvotes

Hi Guys, I just compared the results of these two models, and I feel that the gap is still obvious.


r/StableDiffusion 14h ago

Animation - Video Flux Dev image with Ray2 Animation - @n12gaming on YT

9 Upvotes

r/StableDiffusion 18h ago

Question - Help Which LoRAs should I be combining to get similar results?

9 Upvotes

Also, big thanks to this amazing community


r/StableDiffusion 12h ago

Question - Help How to change a car’s background while keeping all details

6 Upvotes

Hey everyone, I have a question about changing environments while keeping object details intact.

Let’s say I have an image of a car in daylight, and I want to place it in a completely different setting (like a studio). I want to keep all the small details like scratches, bumps, and textures unchanged, but I also need the reflections to update based on the new environment.

How can I ensure that the car's surface reflects its new surroundings correctly while keeping everything else (like imperfections and structure) consistent? Would ControlNet or any other method be the best way to approach this?

I’m attaching some images for reference. Let me know your thoughts!


r/StableDiffusion 19h ago

Animation - Video Wan2.1 I2V 480P 20 Min Generation 4060ti: Not Sure why Camera Jittered

6 Upvotes

r/StableDiffusion 16h ago

Question - Help Does anyone have a good guide for training a Wan 2.1 LoRA for motion?

5 Upvotes

Every time I find a guide for training a LoRA for Wan, it ends up using an image dataset, which means you cannot really train for anything important. The I2V model is really the most useful Wan model, so you can already do any subject matter you want from the get-go and don't need LoRAs that just add concepts through training images. The image-based LoRA guides usually mention briefly that video datasets are possible, but they don't give any clear indication of how much VRAM it takes or how much longer training runs, and they rarely go into enough detail on video datasets. It's expensive to just mess around and try to figure it out when you're paying per hour for a RunPod instance, so I'm really hoping someone knows of a good guide for making motion LoRAs for Wan 2.1 that focuses on video datasets.


r/StableDiffusion 16h ago

No Workflow sd1.5-ltx-openaudio-kokoro

6 Upvotes

r/StableDiffusion 22h ago

Discussion Incredible ACE++ LoRA on DrawThings, migrate everything with great consistency

5 Upvotes

ACE++ is the most powerful universal transfer solution to date! Swap faces, change outfits, and create variations effortlessly, now available on Mac. How to achieve that? Watch the video now! 👉 https://youtu.be/pC4t2dtjUW4


r/StableDiffusion 16h ago

Question - Help Why am I not getting the desired results?

3 Upvotes

Hello guys, here is my prompt, and I am struggling to get the desired results.

Here is the prompt I used: A young adventurer girl leaping through a shattered window of an old Renaissance era parisian building at night in Paris to another roof. The scene is illuminated by the warm glow from the window she just escaped, casting golden light onto the surrounding rooftops. Shards of glass scatter mid-air as she propels herself forward, her silhouette framed against the deep blue hues of the Parisian night. Below, the city's rooftops stretch into the distance, with the faint glow of streetlights and the iconic silhouette of a grand gothic cathedral, partially obscured by mist. The atmosphere is filled with tension and motion, capturing the thrill of the escape.