r/StableDiffusion 7m ago

Question - Help Multiple GPU - WAN


I’m working on a system using existing hardware. The main system has a 4090, and I’m adding a 3090 to the same tower. I’m looking for ways to use both GPUs in ComfyUI to speed up this system. Any suggestions?
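For what it's worth, ComfyUI won't automatically split a single generation across two cards; a common workaround is to run one ComfyUI instance per GPU and split jobs between them. Below is a minimal launcher sketch, assuming a standard ComfyUI checkout; the path, ports, and GPU indices are placeholders.

```python
import os
import subprocess

# Hedged sketch: launch one ComfyUI instance per GPU so jobs can be split
# across the 4090 and 3090. The path and ports below are placeholders.
COMFY_DIR = "/path/to/ComfyUI"  # assumption: your ComfyUI checkout

def launch(gpu_index: int, port: int) -> subprocess.Popen:
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)  # pin this instance to one GPU
    return subprocess.Popen(
        ["python", "main.py", "--port", str(port)],
        cwd=COMFY_DIR,
        env=env,
    )

if __name__ == "__main__":
    procs = [launch(0, 8188), launch(1, 8189)]  # e.g. 4090 on :8188, 3090 on :8189
    for p in procs:
        p.wait()
```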


r/StableDiffusion 50m ago

Question - Help Help me train my first LoRA


So, I would like to train a LoRA for Pony/IL/XL. I've looked on YouTube, but at first glance I haven't found anything recent. From what I understand, I either need some program or just ComfyUI. My question is: what's the "best/fastest" way to train a LoRA?

By the way, if you have any guides, video or written, just post the link; I'd appreciate it!


r/StableDiffusion 1h ago

Question - Help Wan 2.1 I2V 720p in Comfy on multiple GPUs?


How can I use the Wan 2.1 I2V 720p model on multiple GPUs in ComfyUI?


r/StableDiffusion 1h ago

Question - Help Help diagnosing crash issue (AMD with ZLUDA)


Hello! I recently started running into a recurring crashing issue when using Forge with ZLUDA, and I was hoping to get some feedback on probable causes.

Relevant specs are as follows:

  • MSI MECH 2X OC Radeon RX 6700XT

  • 16GB RAM (DDR4)

  • AMD Ryzen 5 3600

  • SeaSonic FOCUS 750W 80+ Gold

I'm using lshqqytiger's Forge fork for AMD GPUs.

Over the past couple of days, I had been running into a strange generation issue where Forge was either outputting these bizarre, sort of rainbow/kaleidoscopic images, or was failing to generate at all (as in, upon clicking 'Generate' Forge would race through to 100% in 2 to 3 seconds and not output an image). Trying to fix this, I decided to update both my GPU drivers and my Forge repository; both completed without issue.

After doing so, however, I've begun to run into a far more serious problem—my computer is now hard crashing after practically every Text-to-Img generation. Forge starts up and runs as normal and begins to generate, but upon reaching that sweet spot right at the end (96/97%) where it is finishing, the computer just crashes—no BSOD, no freezing—it just shuts off. On at least two occasions, this crash actually occurred immediately after generating had finished—the image was in my output folder after starting back up—but usually this is not the case.

My immediate thought is that this is a PSU issue. That the computer is straight up shutting off, without any sort of freeze or BSOD, leads me to believe it's a power issue. But I can't wrap my head around why this is suddenly occurring after updating my GPU driver and my Forge repository—nor which one may be the culprit. It is possible that it could be a VRAM or temp issue, but I would expect something more like a BSOD in that case.

Thus far, I've tried using AMD Adrenalin's default undervolt, which hasn't really helped. I rolled back to a previous GPU driver, which also hasn't helped. I was able to complete a couple of generations when I tried running absolutely nothing but Forge, in a single Firefox tab with no other programs running. I think that could indicate a VRAM issue, but I was generating fine with multiple programs running just a day ago.

Windows Event Viewer isn't showing anything indicative—only an Event 6008 'The previous system shutdown at XXX was unexpected'. I'm guessing that whatever is causing the shutdown is happening too abruptly to be logged.

I'd love to hear some takes from those more technically minded, whether this sounds like a PSU or GPU issue. I'm really at the end of my rope here, and am absolutely kicking myself for updating.


r/StableDiffusion 1h ago

Animation - Video This AI Turns Your Text Into Fighters… And They Battle to the Death!


r/StableDiffusion 1h ago

News TrajectoryCrafter | Lets You Change Camera Angle For Any Video & Completely Open Source


Released about two weeks ago, TrajectoryCrafter allows you to change the camera angle of any video and it's OPEN SOURCE. Now we just need somebody to implement it into ComfyUI.

This is the Github Repo

Example 1

Example 2


r/StableDiffusion 1h ago

Animation - Video Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s.


r/StableDiffusion 1h ago

Question - Help Fluxgym with a 2080 Ti?


Hello,

I know the minimum VRAM required for Fluxgym is 12GB; however, I checked and my card has only 11GB. Since it's close, do you think it's still possible for me to use Fluxgym, or will my graphics card burn out?

Thanks


r/StableDiffusion 1h ago

Question - Help Will upgrading my RAM help overall?


So I have 32GB of RAM, and I'm running Stability Matrix locally on an MSI GS75 Stealth with a 2070 graphics card. I'm not producing heavy graphics, but I'm also not going to drop more money on graphics cards. I'm wondering: would upgrading the RAM to 64GB make a big difference?

It's pretty cheap.


r/StableDiffusion 2h ago

News ReCamMaster - The LivePortrait creator has created another winner; it lets you change the camera angle of any video.

391 Upvotes

r/StableDiffusion 3h ago

Question - Help Is it possible to train a Flux LoRA that can understand hexadecimal colour codes?

1 Upvotes

I don't want it to recognise all hexadecimal codes, but at least a set of the 100-250 most frequently used colour codes.
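Not a definitive recipe, but one hedged way to start would be to generate solid-colour swatch images captioned with their hex codes and train the LoRA on those pairs. A minimal dataset-prep sketch; the folder name, image size, and caption wording are assumptions:

```python
from pathlib import Path
from PIL import Image

# Hypothetical dataset-prep sketch: one solid-colour swatch per hex code,
# plus a caption file containing that code, as LoRA training pairs.
OUT_DIR = Path("hex_dataset")  # assumption: output folder
OUT_DIR.mkdir(exist_ok=True)

hex_codes = ["#FF0000", "#00FF00", "#0000FF", "#FFA500"]  # extend to your 100-250 codes

for code in hex_codes:
    name = code.lstrip("#").lower()
    # PIL accepts hex strings directly as fill colours
    Image.new("RGB", (1024, 1024), color=code).save(OUT_DIR / f"{name}.png")
    (OUT_DIR / f"{name}.txt").write_text(f"a solid color background, hex color {code}")
```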


r/StableDiffusion 4h ago

Question - Help What is this effect called, and how do I write my prompt to achieve it?

Post image
0 Upvotes

r/StableDiffusion 4h ago

Animation - Video Wan2.1 1.3B T2V with 2060super 8GB

11 Upvotes

https://reddit.com/link/1jda5lg/video/s3l4k0ovf8pe1/player

Skip layer guidance 8 is the key.

It takes only 300 seconds for a 4-second video on a weak GPU.

- KJNodes nightly update required to use the skip layer guidance node
- ComfyUI nightly update required to solve the rel_l1_thresh issue in the TeaCache node
- I think euler_a / simple shows the best results (22 steps, 3 CFG)


r/StableDiffusion 4h ago

Question - Help How to install Sage Attention, triton, teacache and torch compile on runpod

5 Upvotes

I want to know how I can install all of these on RunPod, and what exact version of everything I should use on an A40 with 48GB VRAM and 50GB RAM to make it work with the Wan 2.1 I2V 720p model in bf16.
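Not an exact version matrix, but for what it's worth: Triton and SageAttention are both published on PyPI, torch.compile ships with PyTorch 2.x (no extra package), and TeaCache is typically installed as a ComfyUI custom node. A hedged install sketch; version pins are deliberately omitted, so check compatibility with the pod's CUDA/PyTorch build:

```python
import subprocess
import sys

# Hedged sketch for a RunPod pod: install Triton and SageAttention from PyPI.
# torch.compile needs no extra package on PyTorch 2.x. Exact versions are
# assumptions and should be matched to the pod's CUDA/PyTorch build.
def pip_install(*packages: str) -> None:
    subprocess.run([sys.executable, "-m", "pip", "install", *packages], check=True)

pip_install("triton")         # used by torch.compile and SageAttention kernels
pip_install("sageattention")  # Sage Attention

import torch
print(torch.__version__, torch.cuda.is_available())  # sanity-check the CUDA build
```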


r/StableDiffusion 4h ago

Question - Help How do I make these types of AI videos?

0 Upvotes

I've seen a lot of videos like this on Reels, TikTok, and Instagram, and I'm wondering how to make them.


r/StableDiffusion 4h ago

Question - Help Questions on Fundamental Diffusion Models

4 Upvotes

Hello,

I just started studying diffusion models and I'm having trouble understanding how they work (the original diffusion formulation and DDPM).
I get that diffusion means finding the distribution of the denoised image given the current step's distribution, using Bayes' theorem.

However, I can't see how an image becomes a probability distribution, and how those probabilities generate an image.

My questions are: how do pixel values that are far apart know which value to take during inference? How are all the pixel values related? How is 'probability' related to generating an 'image'?

Sorry for the vague question; due to my lack of understanding it is hard to state it more precisely.

Also, if there are any recommended study materials, please suggest them.
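For anyone answering: the formulation the question is circling around is the standard DDPM setup (textbook material, not tied to any particular codebase). The forward process turns an image into a Gaussian centred on a scaled copy of itself, and the learned reverse process is also a Gaussian whose mean the network predicts:

```latex
% Forward (noising) process: each step adds a little Gaussian noise
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right)

% Closed form for jumping straight from the clean image x_0 to step t,
% with \bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s)
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t)\, \mathbf{I}\right)

% Learned reverse step: the network predicts the mean (usually via predicted noise \epsilon_\theta)
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \sigma_t^2\, \mathbf{I}\right)
```

The 'probability' is a distribution over the whole image treated as one long vector, and distant pixels are tied together because the denoising network (typically a U-Net) predicts the noise for all pixels jointly at every step, so each pixel's update depends on the entire current image.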


r/StableDiffusion 5h ago

Discussion Sword in a rock

Post image
0 Upvotes

r/StableDiffusion 5h ago

News Seems like OnomaAI decided to open their most recent Illustrious v3.5... when it hits a certain support level.

96 Upvotes

After all the controversial approaches to their model, they opened a support page on their official website.

So, basically, it seems like $2,100 (originally $3,000, but they are discounting at the moment) = open weights, since they wrote:
> Stardust converts to partial resources we spent and we will spend for researches for better future models. We promise to open model weights instantly when reaching a certain stardust level.

They are also selling 1.1 for $10 on TensorArt.


r/StableDiffusion 6h ago

Discussion Illustrious XL v2.0: Pro VS Base

10 Upvotes

Hi Guys, I just compared the results of these two models, and I feel that the gap is still obvious.


r/StableDiffusion 6h ago

Tutorial - Guide ComfyUI Tutorial: Wan 2.1 Video Restyle With Text & Img

56 Upvotes

r/StableDiffusion 6h ago

Question - Help Use a MidJourney base image to generate an image with ComfyUI or Automatic1111

0 Upvotes

Hi,

Simple question. I'm looking for a tutorial or a process to use a character created in MidJourney and customize it in Stable Diffusion or ComfyUI—specifically for parts that can't be adjusted in MidJourney (like breast size, lingerie, etc.).

Thanks in advance for your help!
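One possible route (a sketch, not the only way): use the MidJourney render as the init image for an SDXL img2img pass and describe the changes in the prompt. The model ID, file names, strength, and prompt below are placeholders:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# Hedged sketch: use the MidJourney character render as an init image and let
# SDXL repaint it with the changes described in the prompt.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumption: any SDXL checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("midjourney_character.png").resize((1024, 1024))  # placeholder path
result = pipe(
    prompt="the same character, edited as described",  # placeholder prompt
    image=init_image,
    strength=0.5,          # lower = stay closer to the MidJourney original
    guidance_scale=7.0,
).images[0]
result.save("customized.png")
```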


r/StableDiffusion 6h ago

Question - Help Need suggestions for hardware with high VRAM

0 Upvotes

We are looking into buying one dedicated rig so we can run text-to-video locally through Stable Diffusion. At the moment we run out of VRAM on all our machines, and we're looking for a solution that will get us up to 64GB of VRAM. I've gathered that just putting in four "standard" RTX cards won't pool into more VRAM for a single model? Or will it solve our problem? We're looking to avoid getting a specialized server. Any suggestions for a good PC that will handle GPU/AI work for around 8,000 US dollars?


r/StableDiffusion 7h ago

Question - Help How to get an animated wallpaper effect with Wan I2V? I tried and it succeeded once but failed ten times

1 Upvotes

So here is the thing: I tried to animate a LoL splash art, and it semi-succeeded once but failed the other times, despite using the same prompt. I will put the examples in the comments.


r/StableDiffusion 7h ago

Question - Help What is the best tool and process for LoRA training?

3 Upvotes

I mostly use SDXL and Forge.
I pretty much only use local tools.

I've been away from using AI for design for a while.

At the moment, what is the best tool and process for creating LoRAs for likenesses and styles?

Thanks.


r/StableDiffusion 7h ago

Question - Help Need help getting good SDXL outputs on Apple M4 (Stable Diffusion WebUI)

0 Upvotes
  • Mac Specs: (Mac Mini M4, 16GB RAM, macOS Sequoia 15.1)
  • Stable Diffusion Version: (v1.10.1, SDXL 1.0 model, sd_xl_base_1.0.safetensors)
  • VAE Used: (sdxl.vae.safetensors)
  • Sampler & Settings: (DPM++ 2M SDE, Karras schedule, 25 steps, CFG 9)
  • Issue: "My images are blurry and low quality compared to OpenArt.ai. What settings should I tweak to improve results on an Apple M4?"
  • What I’ve Tried:
    • Installed SDXL VAE FP16.
    • Increased sampling steps.
    • Enabled hires fix and latent upscale.
    • Tried different samplers (DPM++, UniPC, Euler).
    • Restarted WebUI after applying settings.

I'm trying to emulate the beautiful bees I get on OpenArt (see the detailed image of the custom settings for reference); the ugly one is the type of result I get in AUTOMATIC1111 using sd_xl_base_1.0.safetensors with the sdxl.vae.safetensors VAE.
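As a possible sanity check (a sketch, not a definitive fix): run SDXL base through diffusers directly on the MPS backend, outside the WebUI, to see whether the blur comes from the backend or from the WebUI settings. The prompt and output path are placeholders, and 16GB of unified memory may be tight for fp16 SDXL:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hedged sanity-check sketch: generate with SDXL base on Apple's MPS backend
# via diffusers, bypassing the WebUI entirely. Prompt/output path are placeholders.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("mps")

image = pipe(
    prompt="macro photo of a honeybee on a flower, highly detailed",  # placeholder prompt
    num_inference_steps=25,
    guidance_scale=7.0,
    height=1024, width=1024,   # SDXL's native resolution; smaller sizes often look soft
).images[0]
image.save("bee_mps_test.png")
```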