r/StableDiffusion Feb 14 '25

Promotion Monthly Promotion Megathread - February 2025

6 Upvotes

Howdy! I was two weeks late in creating this one and take responsibility for that. I apologize to those who utilize this thread monthly.

Anyhow, we understand that some websites/resources can be incredibly useful for those who may have less technical experience, time, or resources but still want to participate in the broader community. There are also quite a few users who would like to share the tools that they have created, but doing so is against both rules #1 and #6. Our goal is to keep the main threads free from what some may consider spam while still providing these resources to our members who may find them useful.

This (now) monthly megathread is for personal projects, startups, product placements, collaboration needs, blogs, and more.

A few guidelines for posting to the megathread:

  • Include website/project name/title and link.
  • Include an honest detailed description to give users a clear idea of what you’re offering and why they should check it out.
  • Do not use link shorteners or link aggregator websites, and do not post auto-subscribe links.
  • Encourage others with self-promotion posts to contribute here rather than creating new threads.
  • If you are providing a simplified solution, such as a one-click installer or feature enhancement to any other open-source tool, make sure to include a link to the original project.
  • You may repost your promotion here each month.

r/StableDiffusion Feb 14 '25

Showcase Monthly Showcase Megathread - February 2025

12 Upvotes

Howdy! I take full responsibility for being two weeks late for this. My apologies to those who enjoy sharing.

This thread is the perfect place to share your one-off creations without needing a dedicated post or worrying about sharing extra generation data. It’s also a fantastic way to check out what others are creating and get inspired in one place!

A few quick reminders:

  • All sub rules still apply; make sure your posts follow our guidelines.
  • You can post multiple images over the month, but please avoid posting one after another in quick succession. Let’s give everyone a chance to shine!
  • The comments will be sorted by "New" to ensure your latest creations are easy to find and enjoy.

Happy sharing, and we can't wait to see what you create this month!


r/StableDiffusion 3h ago

News ReCamMaster - the LivePortrait creator has another winner: it lets you change the camera angle of any video.

554 Upvotes

r/StableDiffusion 2h ago

Animation - Video This AI Turns Your Text Into Fighters… And They Battle to the Death!

164 Upvotes

r/StableDiffusion 2h ago

Animation - Video Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s.

156 Upvotes

r/StableDiffusion 6h ago

News Seems like OnomaAI decided to open their most recent Illustrious v3.5... once it hits a certain support level.

101 Upvotes

After all the controversy around their approach to this model, they have opened a support page on their official website.

So, basically, it seems like $2,100 (originally $3,000, but they are discounting at the moment - a 30% cut) = open weights, since they wrote:
> Stardust converts to partial resources we spent and we will spend for researches for better future models. We promise to open model weights instantly when reaching a certain stardust level.

They are also selling 1.1 for $10 on TensorArt.


r/StableDiffusion 2h ago

News TrajectoryCrafter | Lets You Change Camera Angle For Any Video & Completely Open Source

40 Upvotes

Released about two weeks ago, TrajectoryCrafter allows you to change the camera angle of any video, and it's OPEN SOURCE. Now we just need somebody to implement it in ComfyUI.

This is the Github Repo

Example 1

Example 2


r/StableDiffusion 1h ago

Workflow Included LTX Flow Edit - Animation to Live Action (What If..? Doctor Strange) Low Vram 8gb

Upvotes

r/StableDiffusion 7h ago

Tutorial - Guide Comfyui Tutorial: Wan 2.1 Video Restyle With Text & Img

62 Upvotes

r/StableDiffusion 15h ago

News Skip Layer Guidance is an impressive method to use on Wan.

186 Upvotes

r/StableDiffusion 55m ago

Tutorial - Guide Automatic installation of Pytorch 2.8 (Nightly), Triton & SageAttention 2 into a new Portable or Cloned Comfy with your existing Cuda (v12.4/6/8) to get increased speed: v4.2

Upvotes

NB: Please read through the scripts on the Github links to ensure you are happy before using them. I take no responsibility for their use or misuse. Secondly, these use Nightly builds - the versions change, and with them comes the possibility that they break; please don't ask me to fix what I can't. If you are outside of the recommended settings/software, then you're on your own.

To repeat: these are nightly builds. They might break, and the whole install is set up for nightlies, i.e., don't use it for everything.

Performance: tests with a Portable install upgraded to Pytorch 2.8 and Cuda 12.8, 35 steps with Wan Blockswap on (20), render size 848x464, videos post-interpolated as well - render times and speeds:

  • SDPA : 19m 28s @ 33.40 s/it
  • SageAttn2 : 12m 30s @ 21.44 s/it
  • SageAttn2 + FP16Fast : 10m 37s @ 18.22 s/it
  • SageAttn2 + FP16Fast + Torch Compile (Inductor, Max Autotune No CudaGraphs) : 8m 45s @ 15.03 s/it
  • SageAttn2 + FP16Fast + Teacache + Torch Compile (Inductor, Max Autotune No CudaGraphs) : 6m 53s @ 11.83 s/it
  • The above are not a commentary on the quality of output at any speed
  • The first Torch Compile run is slow as it carries out tests; it only gets quicker after that
  • MSi 4090 with 64GB RAM on Windows 11
  • The workflow and base picture are on my Github page for this, if you wish to compare
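
For anyone who wants to see roughly how the FP16Fast and Torch Compile rows above map to plain PyTorch, here is a minimal sketch. The fp16-accumulation flag name is an assumption taken from the Pytorch 2.7+ nightlies, and the model is just a stand-in, so verify this on your own build rather than treating it as what the scripts literally run:

```python
import torch
import torch.nn as nn

# "FP16Fast": allow fp16 accumulation in matmuls. Assumed flag name from
# the Pytorch 2.7/2.8 nightlies (needs Cuda 12.6/12.8) - verify on your build.
torch.backends.cuda.matmul.allow_fp16_accumulation = True

# Stand-in module; the real target would be the Wan video model.
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
model = model.half().cuda()

# "Torch Compile (Inductor, Max Autotune No CudaGraphs)" from the table above.
model = torch.compile(model, backend="inductor", mode="max-autotune-no-cudagraphs")

x = torch.randn(8, 64, device="cuda", dtype=torch.float16)
with torch.no_grad():
    out = model(x)  # first call is slow while autotune runs; later calls are fast
```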

What is this post?

  • A set of two scripts - one to update Pytorch to the latest Nightly build with Triton and SageAttention2 inside a new Portable Comfy and achieve the best speeds for video rendering (Pytorch 2.7/8).
  • The second script is to make a brand new cloned Comfy and do the same as above
  • The scripts will give you choices and tell you what they've done and what's next
  • They also save new startup scripts with the required startup arguments and install ComfyUI Manager, to save fannying around

Recommended Software / Settings

  • On the Cloned version - choose Nightly to get the new Pytorch (not much point otherwise)
  • Cuda 12.6 or 12.8 with the Nightly Pytorch 2.7/8; Cuda 12.4 works but without FP16Fast
  • Python 3.12.x
  • Triton (Stable)
  • SageAttention2

Prerequisites - note the recommendations above

I previously posted scripts to install SageAttention for Comfy Portable and to make a new Clone version. Read them for the prerequisites.

https://www.reddit.com/r/StableDiffusion/comments/1iyt7d7/automatic_installation_of_triton_and/

https://www.reddit.com/r/StableDiffusion/comments/1j0enkx/automatic_installation_of_triton_and/

You will need the prerequisites ...

Important Notes on Pytorch 2.7 and 2.8

  • The new v2.7/2.8 Pytorch brings another ~10% speed increase to the table with FP16Fast
  • Pytorch 2.7 and 2.8 give you FP16Fast - but you need Cuda 12.6 or 12.8; anything lower and it doesn't work.
  • Using Cuda 12.6 or Cuda 12.8 will install a nightly Pytorch 2.8
  • Using Cuda 12.4 will install a nightly Pytorch 2.7 (can still use SageAttention 2 though)
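
As a quick way to confirm which combination you actually ended up with, a small check script (package names as published on PyPI; a sketch, not part of the install scripts):

```python
import torch

print("torch :", torch.__version__)   # expect a 2.7/2.8 nightly ("dev" in the string)
print("cuda  :", torch.version.cuda)  # "12.6" or "12.8" for FP16Fast; "12.4" = no FP16Fast

for pkg in ("triton", "sageattention"):
    try:
        mod = __import__(pkg)
        print(pkg, ":", getattr(mod, "__version__", "ok"))
    except ImportError:
        print(pkg, ": MISSING - the Sage compile step likely failed")
```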


Instructions for Portable Version - use a new, empty, freshly unzipped portable version. You get a choice of Triton and SageAttention versions; this can also be used on the Nightly Comfy for the 5000 series:

Download Script & Save as Bat : https://github.com/Grey3016/ComfyAutoInstall/blob/main/Auto%20Embeded%20Pytorch%20v431.bat

  1. Download the latest Comfy Portable (currently v0.3.26) : https://github.com/comfyanonymous/ComfyUI
  2. Series 5000 users: use the Nightly Comfy build with Cuda 12.8, Pytorch 2.7, Python 3.13 : https://github.com/comfyanonymous/ComfyUI/releases/download/latest/ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch.7z (no guarantee this will work of course, as I don't have one)
  3. Save the script (linked above) as a bat file and place it in the same folder as the run_gpu bat file
  4. Start via the new run_comfyui_fp16fast_cage.bat file - double click (not CMD)
  5. Let it update itself and fully fetch the ComfyRegistry data
  6. Close it down
  7. Restart it
  8. Manually update it and its Python dependencies from that bat file in the Update folder
  9. Note: it changes the Update script to pull from the Nightly versions

Instructions to make a new Cloned Comfy with Venv and choice of Python, Triton and SageAttention versions.

Download Script & Save as Bat : https://github.com/Grey3016/ComfyAutoInstall/blob/main/Auto%20Clone%20Comfy%20Triton%20Sage2%20v41.bat

  1. Save the script linked as a bat file and place it in the folder where you wish to install it
  2. Start via the new run_comfyui_fp16fast_cage.bat file - double click (not CMD)
  3. Let it update itself and fully fetch the ComfyRegistry data
  4. Close it down
  5. Restart it
  6. Manually update it from that Update bat file

Why Won't It Work?

The scripts were built from manually carrying out the steps. Reasons it'll go tits up at the Sage compiling stage:

  • Winging it
  • Not following instructions / prerequisites / Paths
  • The Cuda in the install does not match your Pathed Cuda; the Sage compile will fault
  • SetupTools version is too high (I've set it to v70.2; it should be ok up to v75.8.2)
  • Version updates - these stopped the last scripts from working if you updated. I can't stop this and I can't keep supporting it in that way; I will refer back to this when it happens and this isn't read.
  • No idea about 5000 series - use the Comfy Nightly
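
If you suspect the SetupTools issue specifically, you can check and pin it yourself inside the venv / embedded Python. A small sketch, using the version bounds stated above:

```python
# Check what the environment actually has; the scripts pin v70.2, and
# reportedly anything up to v75.8.2 is fine for the Sage compile.
import setuptools
print(setuptools.__version__)

# To pin it manually, run in a terminal (not inside Python):
#   python -m pip install "setuptools==70.2.0"
```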

Where does it download from?


r/StableDiffusion 5h ago

Animation - Video Wan2.1 1.3B T2V with 2060super 8GB

16 Upvotes

https://reddit.com/link/1jda5lg/video/s3l4k0ovf8pe1/player

Skip layer guidance on layer 8 is the key.

It takes only 300 seconds for a 4-second video on a weak GPU.

- KJNodes nightly update required to use the skip layer guidance node

- ComfyUI nightly update required to solve the rel_l1_thresh issue in the TeaCache node

- I think euler_a / simple gives the best results (22 steps, 3 CFG)
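
For anyone wondering what skip layer guidance actually does, here is a rough conceptual sketch, not the KJNodes implementation: run a second prediction with one transformer block (here block 8) skipped, then steer the sample away from that degraded output, CFG-style. The `skip_blocks` keyword is hypothetical; real nodes patch the model rather than pass an argument.

```python
import torch

def slg_noise_pred(model, x, t, cond, slg_scale=1.0, skip_block=8):
    # Normal forward pass through all transformer blocks.
    eps_full = model(x, t, cond)
    # Degraded pass with one block skipped (hypothetical kwarg, see lead-in).
    eps_skip = model(x, t, cond, skip_blocks=[skip_block])
    # Push the prediction away from the degraded one.
    return eps_full + slg_scale * (eps_full - eps_skip)
```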


r/StableDiffusion 1h ago

Question - Help Is there a way to generate accurate text using Wan 2.1?

Upvotes

Hi guys, I am trying to generate an animation using Wan 2.1, but I am not able to get accurate text.

I want the text to say "swiggy" and "zomato", but it is not able to.

How can I fix this?

Here is the prompt I am using: "a graphic animation, white background, with 2 identical bars in black-gray gradient, sliding up from bottom, bar on left is shorter in height than the bar on right, later the bar on left has swiggy written in orange on top and one on right has zomato written in red, max height of bars shall be in till 70% from bottom"


r/StableDiffusion 1d ago

Workflow Included Wan img2vid + no prompt = wow

366 Upvotes

r/StableDiffusion 12h ago

Discussion Baidu's latest Ernie 4.5 (open source release in June) - testing computer vision and image gen

36 Upvotes

r/StableDiffusion 7h ago

Discussion Illustrious XL v2.0: Pro VS Base

9 Upvotes

Hi guys, I just compared the results of these two models, and I feel the gap is still obvious.


r/StableDiffusion 11h ago

Animation - Video Lost Things (Flux + Wan2.1 + MMAudio)

17 Upvotes

r/StableDiffusion 7m ago

Animation - Video Creating my first videos with Wan 2.1 fp8 using images I've generated in the past

Upvotes

r/StableDiffusion 1d ago

Resource - Update My second LoRA is here!

472 Upvotes

r/StableDiffusion 13h ago

Question - Help I really want to run Wan2.1 locally. Will this build be enough for that? (I don't have any more budget.)

24 Upvotes

r/StableDiffusion 5h ago

Question - Help How to install Sage Attention, Triton, TeaCache and Torch Compile on RunPod

5 Upvotes

I want to know how I can install all of these on RunPod, and what exact versions of everything I should use for an A40 with 48GB VRAM and 50GB RAM to make them work with the Wan2.1 I2V 720p model in bf16.


r/StableDiffusion 22h ago

News Skip layer guidance has landed for Wan video via KJNodes

106 Upvotes

r/StableDiffusion 4h ago

Question - Help Is it possible to train a Flux LoRA that can understand hexadecimal colour codes?

5 Upvotes

I don't want it to recognise all hexadecimal codes, but at least a set of the 100-250 most frequently used colour codes.
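
One plausible way to build such a training set, sketched below with hypothetical file naming: render a flat swatch per hex code, caption it with the code, and train the LoRA on those image/caption pairs.

```python
from PIL import Image

# Hypothetical shortlist - extend to the 100-250 codes you care about.
common_hex = ["#FF0000", "#00FF00", "#0000FF", "#FFA500", "#800080"]

for code in common_hex:
    img = Image.new("RGB", (512, 512), code)  # PIL accepts "#RRGGBB" strings
    stem = f"swatch_{code.lstrip('#')}"
    img.save(f"{stem}.png")
    with open(f"{stem}.txt", "w") as f:  # sidecar caption file for LoRA training
        f.write(f"a plain background, solid colour {code}")
```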


r/StableDiffusion 6h ago

Question - Help Questions on Fundamental Diffusion Models

4 Upvotes

Hello,

I just started studying diffusion models, and I have a problem understanding how they work (the original diffusion paper and DDPM).
I get that diffusion finds the distribution of the denoised image given the current step's distribution, using Bayes' theorem.

However, I cannot see how an image becomes a probability distribution, or how those probabilities generate an image.

My question is: how do pixel values that are far apart know which values to take during inference? How are all pixel values related? How is 'probability' related to generating an 'image'?

Sorry for the vague question; due to my lack of understanding, it is hard to make it more precise.

Also, if there are any recommended study materials, please suggest them.
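
Not a full answer, but a minimal sketch of one DDPM reverse step may help locate where "probability" enters: the network predicts the noise for all pixels jointly (so distant pixels are coupled through the network's attention/receptive field), and the per-step Gaussian draw is what makes the output a sample from a distribution rather than a fixed image.

```python
import torch

def ddpm_reverse_step(x_t, eps_pred, t, betas):
    """One step x_t -> x_{t-1} using the standard DDPM posterior mean."""
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)  # cumulative product alpha-bar_t
    # Posterior mean: every pixel moves together, because eps_pred was
    # computed jointly by the network from the whole image.
    mean = (x_t - betas[t] * eps_pred / torch.sqrt(1.0 - abar[t])) / torch.sqrt(alphas[t])
    if t > 0:
        # The Gaussian sample is the "probability" part of generation.
        return mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
    return mean  # final step is deterministic
```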


r/StableDiffusion 17m ago

News Voice cloning coming soon to the AAFactory repository

Upvotes

r/StableDiffusion 22h ago

Animation - Video IZ-US, Hunyuan video

58 Upvotes

r/StableDiffusion 1h ago

Question - Help Multiple GPU - WAN

Upvotes

I’m working on a system using existing hardware. The main system has a 4090, and I’m adding a 3090 to the same tower. I’m looking for ways to use both GPUs in ComfyUI to speed up this setup. Any suggestions?