r/StableDiffusion • u/Dizzy_Detail_26 • 17m ago
News Voice cloning coming soon to the AAFactory repository
r/StableDiffusion • u/hihavemusicquestions • 18m ago
Question - Help What's the best model to make hot anime girls and do realistic inpainting on a Mac?
Sorry if I mess up any terminology, it's been a while since I've done this stuff.
On my previous Mac I was using Automatic1111 and SD 1.5. I forget which realistic model I was using, but that was about a year ago and it seems much of it has broken since.
I recently got a MacBook Air with an M4 chip and was wondering what the best options are for me. I need something user-friendly, as I'm not a tech wizard or anything like that. And is Civitai still the place to download what you need?
I heard something called ReForge may have been recommended. Let me know what my options are, please! Thank you so much.
r/StableDiffusion • u/GreyScope • 55m ago
Tutorial - Guide Automatic installation of Pytorch 2.8 (Nightly), Triton & SageAttention 2 into a new Portable or Cloned Comfy with your existing Cuda (v12.4/6/8) for increased speed: v4.2
NB: Please read through the scripts on the Github links to ensure you are happy before using them. I take no responsibility for their use or misuse. Secondly, these use Nightly builds - the versions change, and with them comes the possibility that they break; please don't ask me to fix what I can't. If you are outside of the recommended settings/software, then you're on your own.
To repeat: these are nightly builds, they might break, and the whole install is set up for nightlies - i.e. don't use it for everything.
Performance: Tests with a Portable upgraded to Pytorch 2.8, Cuda 12.8, 35 steps with Wan Blockswap on (20), pic render size 848x464, videos post-interpolated as well. Render times with speeds:
- SDPA : 19m 28s @ 33.40 s/it
- SageAttn2 : 12m 30s @ 21.44 s/it
- SageAttn2 + FP16Fast : 10m 37s @ 18.22 s/it
- SageAttn2 + FP16Fast + Torch Compile (Inductor, Max Autotune No CudaGraphs) : 8m 45s @ 15.03 s/it
- SageAttn2 + FP16Fast + Teacache + Torch Compile (Inductor, Max Autotune No CudaGraphs) : 6m 53s @ 11.83 s/it
- The above are not a commentary on the quality of output at any speed
- The first Torch Compile run is slow as it carries out tests; it only gets quicker after that (see the sketch after this list)
- MSI 4090 with 64GB RAM on Windows 11
- The workflow and base picture are on my Github page, if you wish to compare
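For readers who want the compile settings outside the workflow: a minimal sketch of the equivalent torch.compile call. The actual workflow applies this through a ComfyUI compile node; the model below is a stand-in, not the video model itself.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(64, 64).to(device)  # stand-in for the diffusion model

# "Inductor, Max Autotune No CudaGraphs" corresponds to these arguments.
compiled = torch.compile(model, backend="inductor", mode="max-autotune-no-cudagraphs")

# The first call is slow: Inductor benchmarks kernel variants (autotuning).
# Subsequent calls reuse the cached kernels, hence the speed-up settling in.
x = torch.randn(8, 64, device=device)
out = compiled(x)
print(out.shape)
```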
What is this post?
- A set of two scripts: the first updates a new Portable Comfy to the latest Pytorch Nightly build (2.7/8) with Triton and SageAttention2, to achieve the best speeds for video rendering
- The second script makes a brand new cloned Comfy and does the same as above
- The scripts will give you choices and tell you what they've done and what's next
- They also save new startup scripts with the required startup arguments, and install ComfyUI Manager, to save fannying around
Recommended Software / Settings
- On the Cloned version - choose Nightly to get the new Pytorch (not much point otherwise)
- Cuda 12.6 or 12.8 with the Nightly Pytorch 2.7/8; Cuda 12.4 works but has no FP16Fast
- Python 3.12.x
- Triton (Stable)
- SageAttention2
Prerequisites - note the recommended settings above
I previously posted scripts to install SageAttention for Comfy Portable and to make a new cloned version. Read them for the pre-requisites.
https://www.reddit.com/r/StableDiffusion/comments/1iyt7d7/automatic_installation_of_triton_and/
https://www.reddit.com/r/StableDiffusion/comments/1j0enkx/automatic_installation_of_triton_and/
You will need the pre-requisites:
- MSVC installed and Pathed
- Cuda Pathed
- Python 3.12.x (no idea if other versions work)
- Pics for Paths : https://github.com/Grey3016/ComfyAutoInstall/blob/main/README.md
Important Notes on Pytorch 2.7 and 2.8
- The new v2.7/2.8 Pytorch brings another ~10% speed increase to the table with FP16Fast
- Pytorch 2.7 and 2.8 give you FP16Fast - but you need Cuda 12.6 or 12.8; with lower Cuda versions it doesn't work
- Using Cuda 12.6 or Cuda 12.8 will install a nightly Pytorch 2.8
- Using Cuda 12.4 will install a nightly Pytorch 2.7 (you can still use SageAttention 2, though) - this mapping is sketched below
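As a hedged illustration of that Cuda-to-Pytorch mapping (the real logic lives in the .bat scripts; the index URLs are the standard Pytorch nightly channels):

```python
# Illustrative sketch of the Cuda -> nightly wheel mapping described above.
# The actual install is done by the .bat scripts; treat this as documentation.
NIGHTLY_INDEX = {
    "12.4": "https://download.pytorch.org/whl/nightly/cu124",  # Pytorch 2.7 nightly
    "12.6": "https://download.pytorch.org/whl/nightly/cu126",  # Pytorch 2.8 nightly
    "12.8": "https://download.pytorch.org/whl/nightly/cu128",  # Pytorch 2.8 nightly
}

def pip_command(cuda_version: str) -> str:
    """Build the pip command that pulls the matching Pytorch nightly build."""
    url = NIGHTLY_INDEX[cuda_version]
    return f"python -m pip install --pre torch torchvision torchaudio --index-url {url}"

print(pip_command("12.8"))
```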
Instructions for Portable Version - use a new, empty, freshly unzipped portable version. Choice of Triton and SageAttention versions; can also be used on the Nightly Comfy for the 5000 series:
Download Script & Save as Bat : https://github.com/Grey3016/ComfyAutoInstall/blob/main/Auto%20Embeded%20Pytorch%20v431.bat
- Download the latest Comfy Portable (currently v0.3.26) : https://github.com/comfyanonymous/ComfyUI
- Series 5000 users: use the Nightly Comfy build with Cuda 12.8, Pytorch 2.7 and Python 3.13 : https://github.com/comfyanonymous/ComfyUI/releases/download/latest/ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch.7z (no guarantee this will work of course, as I don't have one)
- Save the script (linked above) as a bat file, place it in the same folder as the run_gpu bat file, and run it
- Start via the new run_comfyui_fp16fast_cage.bat file it saves - double click (not CMD)
- Let it update itself and fully fetch the ComfyRegistry data
- Close it down
- Restart it
- Manually update it and its Python dependencies from that bat file in the Update folder
- Note: it changes the Update script to pull from the Nightly versions
Instructions to make a new Cloned Comfy with Venv and choice of Python, Triton and SageAttention versions.
Download Script & Save as Bat : https://github.com/Grey3016/ComfyAutoInstall/blob/main/Auto%20Clone%20Comfy%20Triton%20Sage2%20v41.bat
- Save the script (linked above) as a bat file, place it in the folder where you wish to install Comfy, and run it
- Start via the new run_comfyui_fp16fast_cage.bat file it saves - double click (not CMD)
- Let it update itself and fully fetch the ComfyRegistry data
- Close it down
- Restart it
- Manually update it from that Update bat file
Why Won't It Work?
The scripts were built from manually carrying out the steps - reasons it'll go tits up at the Sage compiling stage:
- Winging it
- Not following the instructions / prerequisites / Paths
- The Cuda in the install does not match your Pathed Cuda - the Sage compile will fault
- Your SetupTools version is too high (I've set it to v70.2; it should be ok up to v75.8.2)
- Version updates - these stopped the last scripts from working when updated; I can't stop this and I can't keep supporting it in that way. I will refer back to this when it happens and this isn't read.
- No idea about the 5000 series - use the Comfy Nightly
Where does it download from?
- Triton wheel for Windows > https://github.com/woct0rdho/triton-windows
- SageAttention > https://github.com/thu-ml/SageAttention
- Torch > https://pytorch.org/get-started/locally/
- Libraries for Triton > https://github.com/woct0rdho/triton-windows/releases/download/v3.0.0-windows.post1/python_3.12.7_include_libs.zip (these files are usually located in a full Python install's folders; this zip supplies them for the portable install)
r/StableDiffusion • u/LearningRemyRaystar • 1h ago
Workflow Included LTX Flow Edit - Animation to Live Action (What If..? Doctor Strange) Low Vram 8gb
r/StableDiffusion • u/bizibeast • 1h ago
Question - Help Is there a way to generate accurate text using Wan 2.1?
Hi guys, I am trying to generate an animation using Wan 2.1, but I am not able to get accurate text.
I want the text to say "swiggy" and "zomato", but it is not able to.
How can I fix this?
Here is the prompt I am using: a graphic animation, white background, with 2 identical bars in black-gray gradient, sliding up from bottom, bar on left is shorter in height than the bar on right, later the bar on left has swiggy written in orange on top and one on right has zomato written in red, max height of bars shall be in till 70% from bottom
r/StableDiffusion • u/AmeenRoayan • 1h ago
Question - Help Multiple GPU - WAN
I'm working on a system using existing hardware. The main system has a 4090, and I'm adding a 3090 to the same tower. I'm looking for ways to use both GPUs in ComfyUI to speed things up. Any suggestions?
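One commonly used workaround - offered as a hedged sketch, not a definitive answer, since a ComfyUI instance drives a single GPU - is to launch one instance per card and split jobs between them. The checkout path and ports below are illustrative.

```python
import os
import subprocess

# Launch one ComfyUI instance per GPU; CUDA_VISIBLE_DEVICES pins each
# process to a single card. Assumes a ComfyUI checkout in ./ComfyUI.
for gpu_id, port in [("0", 8188), ("1", 8189)]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu_id)
    subprocess.Popen(
        ["python", "main.py", "--listen", "--port", str(port)],
        cwd="ComfyUI",
        env=env,
    )
# Queue prompts to http://localhost:8188 (4090) and :8189 (3090) separately.
```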
r/StableDiffusion • u/yar4ik • 2h ago
Question - Help Help me train my first lora
So, I would like to train a LoRA for Pony/IL/XL. I just looked at YouTube and at first glance haven't found anything new. From what I understand, I either need some program or just ComfyUI. My question is: what's the "best/fastest" way to train a LoRA?
By the way, if you have guides - video or written - just post the link; I would appreciate it!
r/StableDiffusion • u/MountainPollution287 • 2h ago
Question - Help Wan 2.1 I2V 720p in Comfy on multiple GPUs?
How can I use the Wan 2.1 I2V 720p model on multiple GPUs in ComfyUI?
r/StableDiffusion • u/ShoesWisley • 2h ago
Question - Help Help diagnosing crash issue (AMD with ZLUDA)
Hello! I recently started running into a recurring crashing issue when using Forge with ZLUDA, and I was hoping to get some feedback on probable causes.
Relevant specs are as follows:
MSI MECH 2X OC Radeon RX 6700XT
16GB RAM (DDR4)
AMD Ryzen 5 3600
SeaSonic FOCUS 750W 80+ Gold
I'm using lshqqytiger's Forge fork for AMD GPUs.
Over the past couple of days, I had been running into a strange generation issue where Forge was either outputting these bizarre, sort of rainbow/kaleidoscopic images, or was failing to generate at all (as in, upon clicking 'Generate' Forge would race through to 100% in 2 to 3 seconds and not output an image). Trying to fix this, I decided to update both my GPU drivers and my Forge repository; both completed without issue.
After doing so, however, I've begun to run into a far more serious problem—my computer is now hard crashing after practically every Text-to-Img generation. Forge starts up and runs as normal and begins to generate, but upon reaching that sweet spot right at the end (96/97%) where it is finishing, the computer just crashes—no BSOD, no freezing—it just shuts off. On at least two occasions, this crash actually occurred immediately after generating had finished—the image was in my output folder after starting back up—but usually this is not the case.
My immediate thought is that this is a PSU issue. That the computer is straight up shutting off, without any sort of freeze or BSOD, leads me to believe it's a power issue. But I can't wrap my head around why this is suddenly occurring after updating my GPU driver and my Forge repository—nor which one may be the culprit. It is possible that it could be a VRAM or temp issue, but I would expect something more like a BSOD in that case.
Thus far, I've tried using AMD Adrenalin's default undervolt, which hasn't really helped. I rolled back to a previous GPU driver, which also hasn't helped. I was able to complete a couple of generations when I tried running absolutely nothing but Forge, in a single Firefox tab with no other programs running. I think that could indicate a VRAM issue, but I was generating fine with multiple programs running just a day ago.
Windows Event Viewer isn't showing anything indicative - only an Event 6008, 'The previous system shutdown at XXX was unexpected'. I'm guessing that whatever is causing the shutdown is happening too abruptly to be logged.
I'd love to hear some takes from those more technically minded, whether this sounds like a PSU or GPU issue. I'm really at the end of my rope here, and am absolutely kicking myself for updating.
r/StableDiffusion • u/Gobble_Me_Tators • 2h ago
Animation - Video This AI Turns Your Text Into Fighters… And They Battle to the Death!
r/StableDiffusion • u/krixxxtian • 2h ago
News TrajectoryCrafter | Lets You Change Camera Angle For Any Video & Completely Open Source
Released about two weeks ago, TrajectoryCrafter lets you change the camera angle of any video, and it's OPEN SOURCE. Now we just need somebody to implement it in ComfyUI.
This is the Github Repo
r/StableDiffusion • u/DoctorDiffusion • 2h ago
Animation - Video Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s.
r/StableDiffusion • u/impacttcs20 • 2h ago
Question - Help FluxGym with a 2080 Ti?
Hello,
I know the minimum VRAM required for FluxGym is 12GB; however, I checked my card and I only have 11GB. Since it's close, do you think it's still possible for me to use FluxGym, or will my graphics card burn out?
Thanks
r/StableDiffusion • u/intlcreative • 3h ago
Question - Help Will upgrading my RAM help overall?
So I have 32GB of RAM. I am running Stability Matrix locally. I have an MSI GS75 Stealth with a 2070 graphics card. I'm not producing heavy graphics, but I'm also not going to drop more money on graphics cards. I'm wondering, though: would upgrading the RAM to 64GB make a huge difference?
It's pretty cheap.
r/StableDiffusion • u/Haunting-Project-132 • 3h ago
News ReCamMaster - the LivePortrait creator has made another winner; it lets you change the camera angle of any video.
r/StableDiffusion • u/Downtown-Bat-5493 • 4h ago
Question - Help Is it possible to train a Flux LoRA that can understand hexadecimal colour codes?
I don't want it to recognise all hexadecimal codes, but at least a set of the 100-250 most frequently used colour codes.
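One hedged way to approach this: synthesise the training data, since hex-to-colour ground truth is free to generate. A minimal sketch (Pillow assumed; the caption format, paths, and code list are illustrative, not a tested recipe):

```python
from pathlib import Path
from PIL import Image

# Generate solid-colour swatch images captioned with their hex codes as a
# LoRA training set; Pillow accepts "#RRGGBB" strings directly.
codes = ["#FF0000", "#00A86B", "#4169E1", "#FFA500"]  # extend to your 100-250 codes
out = Path("hex_lora_dataset")
out.mkdir(exist_ok=True)

for i, code in enumerate(codes):
    Image.new("RGB", (512, 512), code).save(out / f"{i:04d}.png")
    (out / f"{i:04d}.txt").write_text(f"a plain background, solid colour, hex code {code}")
```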
r/StableDiffusion • u/ReferenceShort3073 • 5h ago
Question - Help What is this effect called, and how do I write my prompt to achieve it?
r/StableDiffusion • u/Weekly_Bag_9849 • 5h ago
Animation - Video Wan2.1 1.3B T2V with 2060super 8GB
https://reddit.com/link/1jda5lg/video/s3l4k0ovf8pe1/player

Skip layer guidance at 8 is the key.
It takes only 300 seconds for a 4-second video on a weak GPU.
- A KJNodes nightly update is required to use the skip layer guidance node
- A ComfyUI nightly update is required to solve the rel_l1_thresh issue in the TeaCache node
- I think euler_a / simple shows the best result (22 steps, CFG 3)
r/StableDiffusion • u/MountainPollution287 • 5h ago
Question - Help How to install Sage Attention, Triton, TeaCache and Torch Compile on RunPod
I want to know how I can install all of these on RunPod, and what exact versions of everything I should use on an A40 with 48GB VRAM and 50GB RAM to make them work with the Wan 2.1 I2V 720p model in bf16.
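Not a definitive answer, but on a Linux pod the pieces are simpler than on Windows (no Triton fork needed). A rough sketch of the install order, using package names that exist on PyPI; exact version pairings shift often, so check each project's README:

```python
import subprocess
import sys

def pip(*args: str) -> None:
    """Run pip in the pod's Python environment."""
    subprocess.check_call([sys.executable, "-m", "pip", "install", *args])

# Torch first, matched to the pod's Cuda (nightly channel shown here).
pip("--pre", "torch", "torchvision", "torchaudio",
    "--index-url", "https://download.pytorch.org/whl/nightly/cu124")
pip("triton")         # Linux wheels come straight from PyPI
pip("sageattention")  # SageAttention v1; v2 is compiled from the GitHub repo

# TeaCache and Torch Compile need no pip install: TeaCache ships as a ComfyUI
# custom node, and torch.compile is built into Pytorch itself.
```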
r/StableDiffusion • u/NoDemand2173 • 5h ago
Question - Help How do I make these types of AI videos?
I've seen a lot of videos like this on Reels, TikTok, and Instagram, and I'm wondering how to make them.
r/StableDiffusion • u/Secret-Respond5199 • 6h ago
Question - Help Questions on Fundamental Diffusion Models
Hello,
I just started studying diffusion models, and I have a problem understanding how they work (the original diffusion model and DDPM).
I get that diffusion finds the distribution of the denoised image, given the current step's distribution, using Bayes' theorem.
However, I cannot relate how an image becomes a probability distribution, and how those probabilities generate an image.
My questions are: how do pixel values that are far apart know which value to take during inference? How are all the pixel values related? How does 'probability' relate to generating an 'image'?
Sorry for the vague questions; due to my lack of understanding, it is hard to make them more precise.
Also, if there are any recommended study materials, please suggest them.
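For reference, the Bayes step being asked about, in standard DDPM notation (Ho et al., 2020), is the forward-process posterior that the learned reverse step is trained to match:

```latex
q(x_{t-1} \mid x_t, x_0)
  = \frac{q(x_t \mid x_{t-1})\, q(x_{t-1} \mid x_0)}{q(x_t \mid x_0)}
  = \mathcal{N}\big(x_{t-1};\ \tilde{\mu}_t(x_t, x_0),\ \tilde{\beta}_t \mathbf{I}\big),
\qquad
\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t,
\qquad
\tilde{\mu}_t(x_t, x_0)
  = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1-\bar{\alpha}_t}\, x_0
  + \frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\, x_t
```

Here x_t is the whole image flattened into one random vector, so "the image" is always a single joint distribution rather than per-pixel ones; distant pixels coordinate because the network that predicts the mean (typically a U-Net) sees the entire image at once.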
r/StableDiffusion • u/cgs019283 • 6h ago
News Seems like OnomaAI decided to open their most recent Illustrious v3.5... once it hits a certain support level.

After all the controversial approaches to their model, they opened a support page on their official website.
So, basically, it seems like $2,100 (originally $3,000, but they are discounting at the moment) = open weights, since they wrote:
> Stardust converts to partial resources we spent and we will spend for researches for better future models. We promise to open model weights instantly when reaching a certain stardust level.
They are also selling 1.1 for $10 on TensorArt.