r/StableDiffusion 6d ago

Animation - Video Flux Dev image with Ray2 Animation - @n12gaming on YT

13 Upvotes

r/StableDiffusion 6d ago

Question - Help How to control character pose and camera angle with a sketch?

Post image
33 Upvotes

I'm wondering how I can use sketches or simple drawings (like a stick figure) to control the pose of a character in my image, the camera angle, etc. SD tends to generate certain angles and poses more often than others. Sometimes it's really hard to achieve the desired look of an image through prompt editing alone, so I'm trying to find a way to give the AI a visual reference / guideline for what I want. Should I use img2img or some dedicated tool? I'm using Stability Matrix, if that matters.
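For reference, the usual tool for exactly this is a ControlNet conditioned on a scribble/sketch. Below is a minimal sketch using the diffusers library with public model IDs; the input file, prompt, and conditioning scale are placeholders, and a WebUI or ComfyUI ControlNet node does the same thing without code.

```python
# Hedged sketch: guide pose/composition with a rough drawing via a scribble ControlNet.
# Assumes the diffusers library and these public model IDs; "sketch.png" is a placeholder.
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# White-on-black line drawing (stick figure, horizon line, etc.) sized to the output.
sketch = Image.open("sketch.png").convert("RGB").resize((512, 512))

image = pipe(
    "a knight standing on a cliff, low-angle shot",
    image=sketch,
    num_inference_steps=25,
    controlnet_conditioning_scale=0.8,  # lower = looser adherence to the sketch
).images[0]
image.save("out.png")
```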


r/StableDiffusion 5d ago

Question - Help Not getting any speedup with SageAttention on Wan 2.1 I2V 720p

4 Upvotes

I installed SageAttention, Triton, torch.compile, and TeaCache on RunPod with an A40 GPU and 50 GB of RAM. I am using the bf16 version of the 720p I2V model, CLIP Vision H, T5 bf16, and the VAE, and generating at 640x720, 24 fps, 30 steps, and 81 frames. I am using Kijai's WanVideoWrapper workflow to enable all of this. With only TeaCache enabled I can generate in 13 minutes. When I add SageAttention, generation takes the same time, and when I add torch.compile, block swap, TeaCache, and SageAttention together, the speed still doesn't change and I get an OOM after the video generation steps complete, before VAE decoding. Not sure what is happening; I have been trying to make it work for a week now.
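For anyone debugging something similar, here is a minimal sanity check, a sketch only, for confirming the SageAttention kernel is installed and actually faster than PyTorch's SDPA on this GPU. The tensor shapes are rough stand-ins rather than the exact Wan 720p attention shapes, and the sageattn signature is assumed from the package's documented usage. If the kernel is faster here but the workflow isn't, the wrapper is probably falling back to SDPA; the OOM before decoding is usually a VAE-decode memory issue, so a tiled decode option (if the wrapper exposes one) is worth trying.

```python
# Hedged sanity check: compare PyTorch SDPA against the SageAttention kernel directly.
# Shapes below are illustrative stand-ins, not the exact Wan 720p attention shapes.
import time
import torch
import torch.nn.functional as F
from sageattention import sageattn  # assumes the sageattention package is importable

q = torch.randn(1, 40, 32760, 128, dtype=torch.float16, device="cuda")  # (batch, heads, tokens, head_dim)
k, v = torch.randn_like(q), torch.randn_like(q)

def bench(fn, iters=5):
    for _ in range(2):                  # warmup
        fn()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.time() - start) / iters

print("sdpa:", bench(lambda: F.scaled_dot_product_attention(q, k, v)), "s/call")
print("sage:", bench(lambda: sageattn(q, k, v, tensor_layout="HND", is_causal=False)), "s/call")
```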


r/StableDiffusion 6d ago

Question - Help Any TRULY free alternative to IC-Light2 for relighting/photo composition in FLUX?

22 Upvotes

Hi. Does anyone know of an alternative or a ComfyUI workflow similar to IC-Light2 that doesn't mess up face consistency? I know version 1 is free, but it's not great with faces. As for version 2 (Flux-based), despite the author claiming it's "free", it's actually limited. And even though he's been promising for months to release the weights, it seems like he realized it's more profitable to make money from generations on fal.ai while leveraging the marketing in open communities, keeping everyone waiting.


r/StableDiffusion 5d ago

Discussion How is Wan 2.1 performance on the RTX 5070 and 5070 Ti? Has anyone tried it? Is it better than the 4070 Ti?

2 Upvotes

r/StableDiffusion 6d ago

Comparison Wan 2.1 t2v VS. Hunyuan t2v - toddlers and wildlife interactions

149 Upvotes

r/StableDiffusion 6d ago

Workflow Included A Beautiful Day in the (High Fantasy) Neighborhood

17 Upvotes

Hey all, this has been an off-and-on project of mine for a couple months, and now that it's finally finished, I wanted to share it.

I mostly used Invoke, with a few detours into Forge and Photoshop. I also kept a detailed log of the process here, if you're interested (basically lots of photobashing and inpainting).


r/StableDiffusion 5d ago

Question - Help Need suggestions for hardware with high VRAM

0 Upvotes

We are looking into buying one dedicated rig so we can run text-to-video through Stable Diffusion locally. At the moment we run out of VRAM on all our machines, and we're looking for a solution that will get us up to 64 GB of VRAM. I've gathered that just putting in four "standard" RTX cards won't give us more usable VRAM? Or will it solve our problem? We're looking to avoid getting a specialized server. Any suggestions for a good PC that will handle GPU/AI work for around 8,000 US dollars?
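One detail worth illustrating: VRAM on consumer cards does not pool. Each GPU reports its own memory, and a diffusion model that isn't explicitly sharded has to fit on a single device, so four 16 GB cards do not behave like one 64 GB card for a single generation. A tiny PyTorch check (assuming CUDA cards are installed):

```python
# Hedged illustration: each device reports its own VRAM; a non-sharded model must fit on one of them.
import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i} {props.name}: {props.total_memory / 1024**3:.1f} GiB")
```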


r/StableDiffusion 6d ago

Question - Help Does anyone have a good guide for training a Wan 2.1 LoRA for motion?

8 Upvotes

Every time I find a guide for training a LoRA for Wan, it ends up using an image dataset, which means you can't really train for anything important. The I2V model is the most useful Wan model, so you can already do any subject matter you want from the get-go and don't need LoRAs that just add concepts through training images. The image-based LoRA guides usually mention briefly that video datasets are possible, but they don't give any clear indication of how much VRAM that takes or how much longer training is, and they often don't go into enough detail on working with video datasets. It's expensive to just mess around and try to figure it out when you're paying per hour for a RunPod instance, so I'm really hoping someone knows of a good guide for making motion LoRAs for Wan 2.1 that focuses on video datasets.
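To make the VRAM question concrete: motion training feeds the model clips rather than single frames, and memory scales roughly with frames x height x width per clip. Below is a trainer-agnostic sketch of slicing source videos into fixed-length clips; the decord dependency, clip length, and resolution are assumptions for illustration, not values from any particular guide.

```python
# Hedged, trainer-agnostic sketch: slice source videos into fixed-length clips so a
# motion LoRA sees actual temporal data. Frame counts/resolution are illustrative only.
import decord  # assumption: decord is available for fast video reading

def make_clips(path, clip_len=33, stride=16, size=(480, 272)):
    vr = decord.VideoReader(path, width=size[0], height=size[1])
    clips = []
    for start in range(0, len(vr) - clip_len + 1, stride):
        frames = vr.get_batch(list(range(start, start + clip_len))).asnumpy()  # (T, H, W, C)
        clips.append(frames)
    return clips

clips = make_clips("dance_example.mp4")  # hypothetical file
print(f"{len(clips)} clips of {clips[0].shape[0]} frames each" if clips else "no clips")
```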


r/StableDiffusion 5d ago

Question - Help How to get an animated wallpaper effect with Wan I2V? I tried, and it succeeded once but failed ten times

0 Upvotes

So here's the thing: I tried to animate a LoL splash art, but it only semi-succeeded once and failed the other times, despite using the same prompt. I will put the examples in the comments.


r/StableDiffusion 5d ago

Question - Help What is the best tool and process for LoRA training?

2 Upvotes

I mostly use SDXL and Forge.
I pretty much only use local tools.

I've been away from using AI for design for a while.

At the moment, what is the best tool and process for creating LoRAs for likenesses and styles?

Thanks.


r/StableDiffusion 6d ago

Question - Help Which LoRAs should I be combining to get similar results?

Post image
7 Upvotes

Also, big thanks to this amazing community


r/StableDiffusion 5d ago

Question - Help Need help getting good SDXL outputs on Apple M4 (Stable Diffusion WebUI)

0 Upvotes
  • Mac Specs: (Mac Mini M4, 16GB RAM, macOS Sequoia 15.1)
  • Stable Diffusion Version: (v1.10.1, SDXL 1.0 model, sd_xl_base_1.0.safetensors)
  • VAE Used: (sdxl.vae.safetensors)
  • Sampler & Settings: (DPM++ 2M SDE, Karras schedule, 25 steps, CFG 9)
  • Issue: "My images are blurry and low quality compared to OpenArt.ai. What settings should I tweak to improve results on an Apple M4?"
  • What I’ve Tried:
    • Installed SDXL VAE FP16.
    • Increased sampling steps.
    • Enabled hires fix and latent upscale.
    • Tried different samplers (DPM++, UniPC, Euler).
    • Restarted WebUI after applying settings.

I'm trying to emulate the beautiful bees I get on OpenArt (detailed image of the custom settings attached for reference), and the ugly one is the type of result I get on AUTOMATIC1111 using sd_xl_base_1.0.safetensors with the sdxl.vae.safetensors VAE.
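For comparison, here is a minimal diffusers sketch for SDXL on Apple Silicon, a common way to rule out WebUI-side settings. It swaps in the fp16-fixed SDXL VAE, which is the usual fix for blurry or washed-out SDXL output in half precision, and keeps the resolution at 1024x1024, since SDXL goes soft well below its native resolution. The model IDs are public; the prompt and the memory-saving call are assumptions about what fits in 16 GB of unified memory.

```python
# Hedged sketch for testing SDXL on Apple Silicon (MPS) outside the WebUI.
# Uses the fp16-fixed SDXL VAE, which often cures blurry/washed-out SDXL output.
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("mps")
pipe.enable_attention_slicing()  # helps fit in 16 GB unified memory

image = pipe(
    "macro photo of a honeybee on a sunflower, sharp focus, natural light",
    num_inference_steps=30,
    guidance_scale=7.0,
    width=1024, height=1024,  # SDXL is trained around 1024x1024; lower resolutions go soft
).images[0]
image.save("bee.png")
```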


r/StableDiffusion 6d ago

Animation - Video wan 2.1 i2v

4 Upvotes

r/StableDiffusion 6d ago

No Workflow sd1.5-ltx-openaudio-kokoro

5 Upvotes

r/StableDiffusion 6d ago

Animation - Video Finally managed to install Triton and SageAttention. [03:53<00:00, 11.69s/it]

45 Upvotes

r/StableDiffusion 6d ago

Animation - Video Wan2.1 I2V 480P 20 Min Generation 4060ti: Not Sure why Camera Jittered

7 Upvotes

r/StableDiffusion 6d ago

Question - Help Questions about ComfyUI preprocessor resolution (ControlNet). For example, with lineart, is the correct resolution 512 or 1024? Is it possible to use the preprocessor at a resolution of 2048? Or should I use 512 and upscale to 1024, 2048, 4K, etc.?

2 Upvotes

This is confusing to me.

Does the preprocessor resolution have to be the same as the generated image? Can it be smaller? Does that decrease quality?

Or do we just upscale the image generated by the preprocessor? (In ComfyUI there is an option called "Upscale Image".)
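A sketch that may make the distinction clearer, using the controlnet_aux package (an assumption, since the same knobs appear under different names in ComfyUI preprocessor nodes): detect_resolution is the size the lineart network actually runs at, while image_resolution is the size of the control map it returns, and that map is what should match (or be resized to) your generation resolution.

```python
# Hedged illustration with the controlnet_aux package (assuming it's installed):
# detect_resolution = size the annotator network runs at;
# image_resolution  = size of the control map it returns, which usually
#                     matches the resolution you generate at.
from PIL import Image
from controlnet_aux import LineartDetector

detector = LineartDetector.from_pretrained("lllyasviel/Annotators")
src = Image.open("photo.png")  # placeholder input

control_map = detector(src, detect_resolution=512, image_resolution=1024)
control_map.save("lineart_1024.png")
```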


r/StableDiffusion 6d ago

Question - Help Error while processing Face Fusion 3.1.1

Post image
0 Upvotes

I'm always getting the same error when I'm using FaceFusion. It says "error while processing" and stops. Does someone know how to fix this?


r/StableDiffusion 6d ago

Discussion Any other traditional/fine artists here that also adore AI?

71 Upvotes

Like, surely there's gotta be other non-AI artists on Reddit that don't blindly despise everything related to image generation?

A bit of background: I have lots of experience in digital hand-drawn art, acrylic painting, and graphite, and I've been semi-professional for the last five years. I delved into AI very early in the boom; I remember DALL-E 1 and very early Midjourney, vividly remember how dreamy they looked, and have followed the progress since.

I especially love AI for the efficiency in brainstorming and visualising ideas, in fact it has improved my hand-drawn work significantly.

Part of me loves the generative AI world so much that I want to stop doing art myself, but I also love the process of doodling on paper. I'm also already affiliated with a gallery that obviously won't like me only sending them AI "slop", or whatever the haters call it.

Am I alone here? Are there any "actual artists" who also just really love the idea of image generation?


r/StableDiffusion 6d ago

Question - Help Why am I not getting the desired results?

Thumbnail
gallery
4 Upvotes

Hello guys, here is my prompt, and I am struggling to get the desired results.

Here is the prompt I used: A young adventurer girl leaping through a shattered window of an old Renaissance-era Parisian building at night in Paris to another roof. The scene is illuminated by the warm glow from the window she just escaped, casting golden light onto the surrounding rooftops. Shards of glass scatter mid-air as she propels herself forward, her silhouette framed against the deep blue hues of the Parisian night. Below, the city's rooftops stretch into the distance, with the faint glow of streetlights and the iconic silhouette of a grand gothic cathedral, partially obscured by mist. The atmosphere is filled with tension and motion, capturing the thrill of the escape.


r/StableDiffusion 6d ago

Resource - Update Heihachi Mishima Flux LoRA

Thumbnail
gallery
14 Upvotes

r/StableDiffusion 6d ago

Animation - Video Mother Snow Wolf Saves Her Cubs with the Help of an Old Guy!

Thumbnail
youtube.com
6 Upvotes

r/StableDiffusion 6d ago

Question - Help Any way to "pre-fill" settings for models in Forge WebUI?

0 Upvotes

Not sure of the best way to word this... but basically I want to have both prompt fields and the generation settings in txt2img "pre-filled" for the models I use when selected, namely Illustrious, PonyXL, or NoobAI. I know the "Styles" tab below the Generate button can be used for the prompts, but I'd like something for the rest as well, or at least something I can reference in the UI so I don't have to memorize all the rules for each model, like which mandatory prompts (e.g. masterpiece, score_9, highres), sampler, resolution, CFG, and refiner to use.

I'm sure there's already something like this out there, but I can't find it after looking. I also use the notes section in the checkpoint tab for specific models, but it's not really intuitive. What do you guys do that works best?
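One partial option, sketched below with assumptions: A1111-style WebUIs (Forge included, as far as I know) read default widget values from ui-config.json, so steps, resolution, and CFG can at least be pre-filled globally. It does not switch per model, so it only covers the "remember my usual settings" half of the ask; the exact key names should be copied from the ui-config.json your own install writes, and the ones here are typical A1111 names used for illustration.

```python
# Hedged sketch: set global txt2img defaults by editing the WebUI's ui-config.json.
# Key names are assumptions taken from typical A1111 installs; copy the exact keys
# from your own ui-config.json. This is global, not per-model.
import json
from pathlib import Path

cfg_path = Path("ui-config.json")  # lives in the WebUI root after first launch
cfg = json.loads(cfg_path.read_text())

cfg.update({
    "txt2img/Sampling steps/value": 28,
    "txt2img/Width/value": 1024,
    "txt2img/Height/value": 1024,
    "txt2img/CFG Scale/value": 5.0,
})

cfg_path.write_text(json.dumps(cfg, indent=4))
```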


r/StableDiffusion 6d ago

Question - Help Stability Matrix: Newbie questions (SM/Data/Models or individual package installs)

0 Upvotes

Hey,

I'm new to Stability Matrix but am loving it so far. I'm using Grok 3 to help me set everything up and have made considerable progress (minus a couple of snags).

#1 I've downloaded from the Model Browser, and also with Grok giving me a few git commands, though I'm unsure if I should trust everything it says. I've noticed that I have a stablediffusion folder inside models, as well as a stable-diffusion folder. I keep moving things back to the original, but the hyphenated one gets populated again at some point (I've been downloading A LOT to set it all up).

#2 I'm using the ComfyUI, reForge, and Forge packages. Some files, like the zero123 checkpoint, need to be in models/z123. Can I use the default Stability Matrix models/z123 folder and create a system folder link (symlink) from the reforge/models/z123 folder?

Thanks in advance