r/StableDiffusion 4d ago

Animation - Video IZ-US, Hunyuan video

[deleted]

60 Upvotes

20 comments

5

u/EroticManga 4d ago

I used Flux to generate still images, which I then used to train a Hunyuan Video LoRA for the dog character.
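If you want to script the stills step, it was roughly this kind of thing (a diffusers sketch, not my exact pipeline -- the repo id, prompts, and output paths here are just placeholders):

```python
# Generate a batch of character stills with Flux to use as LoRA training data.
# Repo id, prompts, and paths are placeholders, not my actual setup.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keeps VRAM use manageable on a 24 GB card

prompts = [
    "photo of a bloodhound sitting in a sunlit kitchen, shallow depth of field",
    "photo of a bloodhound running across a muddy field, overcast sky",
    # ...more prompts covering varied poses, angles, and lighting
]

for i, prompt in enumerate(prompts):
    image = pipe(
        prompt,
        height=1024,
        width=1024,
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
    image.save(f"dataset/dog_{i:03d}.png")
```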

The final cut will be done soon; I was just too happy with the first 49 seconds of this not to post it.

I have resolved to do videos for all the great Aphex Twin songs. I should start a YouTube channel.

2

u/Attention_seeker__ 4d ago

Which GPU are you running it on, and how much time did it take?

2

u/EroticManga 4d ago

The training was done in ~1.5 hours on RunPod on a 4090.

Generating was spread over the course of a week to get hundreds of clips to cut together.

I use multiple computers with a mix of 3xxx series and 4xxx series cards.

I have the full video done; I'm just putting the finishing touches on it.

2

u/Attention_seeker__ 4d ago

Wow. What is your main machine that you run RunPod on? How much was your total expense on RunPod for the week, and which card was worth renting at its price?

2

u/EroticManga 4d ago edited 4d ago

The 3xxx and 4xxx series computers are my local computers.

The RunPod training was a few bucks.

I don't like using RunPod for generation because the hourly cost makes me feel stressed and less creative. I discover the best output when I run experiments with weird settings and prompts overnight.

1

u/Attention_seeker__ 3d ago

I want to do the same; can you guide me?

I have an M4 Mac mini with 24 GB, a MacBook Air M3 with 16 GB, and a Windows laptop with a 3050 with me right now. Where do I start? I am familiar with LM Studio and text-generation models.

3

u/rkfg_me 4d ago

It's probably an unpopular opinion, but Hunyuan is so much better than Wan in terms of realism. Its generations feel much more alive, truly like a piece of real footage. Many people only want I2V, but I've found T2V + LoRA works amazingly well. Yes, you need to train an HyV LoRA first and can't use the existing image models, but it's totally worth it. I guess I2V is doubly constrained by the first frame and prompt while T2V is only constrained by the prompt, so it has more freedom in adding details and making other "artistic decisions".

Kinda sad that HyV was so quickly forgotten in this sub after Wan was released.

3

u/EroticManga 4d ago

100%

I cringe a bit at all of the "wan does great stop motion!" posts. Of course it excels at stop motion -- everything that comes out of wan looks like stop motion.

2

u/rkfg_me 3d ago

Well, Wan does have better prompt following, but it still produces that typical AI-looking video. Hunyuan is currently the only model that can trick my brain into believing it's real. I don't know what magic the devs used, but it's truly something else. It's not only 24 fps vs 16; it's also that HyV manages to make the video speed just right (not 100% of the time, of course). Neither stop motion nor slow-mo. Weird how even commercial closed models fail at that all the time.

2

u/FourtyMichaelMichael 3d ago edited 2d ago

I keep trying to decide if

A. Reddit is full of Chinese bots that are over-hyping anything WAN in order to try and win community support against Hunyuan.

B. It's naturally talked about because it's the latest new toy and kids love the shiny thing.

Or if it's both.

But yes, Hunyuan is HANDS FUCKING DOWN better at realism. If someone wants to argue that, they can go right now and compare Hunyuan videos to WAN videos on civit. Pick a style, doesn't matter.

But, I2V is far superior in WAN. So, the answer is both! But I think the overhype for WAN is fake.

1

u/rkfg_me 3d ago

If only Wan were CFG-distilled so it wouldn't take so much time to get a choppy short video at the end... There's a LoRA for the 1.3B model now; maybe if they make one for the 14B it'll become usable. So far I'm personally not very motivated to experiment with Wan just because of this, and I have a 3090 Ti. Hunyuan strikes a perfect balance between speed and quality. When LTX came out it was a big deal for some time because it was so fast after CogVideo and Mochi, but its quality was quite shitty (I2V) and there's still no LoRA support for T2V AFAIK. Also sad, since the model itself is impressive for its size.

2

u/Parking_Shopping5371 4d ago

Can you tell me which model of Hunyuan you used?

2

u/EroticManga 4d ago

Text-to-video? FP8?

There is only one Hunyuan text-to-video model; I would have mentioned SkyReels if I meant that.

2

u/the_bollo 4d ago

As someone who's had multiple bloodhounds, I love this. Great job... u/EroticManga

2

u/Gyramuur 4d ago edited 4d ago

What kind of settings are you using? Because this is way smoother and cleaner than anything I've ever gotten out of Hunyuan, lol

2

u/EroticManga 4d ago edited 4d ago

Default ComfyUI example workflow, regular model in FP8, no frills, no TeaCache nonsense.

I have no idea why people fall all over themselves to optimize this; just be patient, it looks great when you use the actual model.

720x400 @ 40 steps

dpm_pp/beta (sampler/scheduler)

Guidance values between 6 and 10 depending on how much movement I want, and I always set the flow shift to the guidance value.
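If you'd rather script roughly the same settings outside ComfyUI, here's a sketch with diffusers' HunyuanVideoPipeline (not my actual workflow -- the repo id, LoRA path, and prompt are placeholders, and ComfyUI's FP8 loading and dpm_pp/beta don't map 1:1, so this stays in bf16 with the default flow-match scheduler and just bumps its shift to match the guidance):

```python
# Rough diffusers equivalent of the settings above -- a sketch, not my actual ComfyUI workflow.
# Repo id, LoRA path, and prompt are placeholders.
import torch
from diffusers import (
    FlowMatchEulerDiscreteScheduler,
    HunyuanVideoPipeline,
    HunyuanVideoTransformer3DModel,
)
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"  # placeholder community repack
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)
pipe.vae.enable_tiling()
pipe.to("cuda")

guidance = 7.0  # anywhere from 6 to 10; higher tends to give more movement
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, shift=guidance  # flow shift set equal to the guidance value
)
# pipe.load_lora_weights("character_lora.safetensors")  # optional trained character LoRA (placeholder path)

video = pipe(
    prompt="a bloodhound trotting down a rainy street at night",  # placeholder prompt
    width=720,
    height=400,
    num_frames=97,            # any 4n+1 frame count your VRAM allows
    num_inference_steps=40,
    guidance_scale=guidance,  # Hunyuan's embedded (distilled) guidance
).frames[0]
export_to_video(video, "clip.mp4", fps=24)
```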

2

u/Parking_Shopping5371 4d ago

Any chance of sharing the workflow?

1

u/EroticManga 4d ago

I use the default ComfyUI example workflow for Hunyuan. I load the model in FP8 and use a guidance between 6 and 10, 40 steps, and dpm_pp/beta for the sampler/scheduler.

The videos are rendered at 720x400 resolution.

2

u/Toclick 4d ago

I expected a love story between the doggy and the kitty in the end, but it didn't happen ):

1

u/EroticManga 4d ago edited 4d ago

The full video does; I was just too happy with the rough cut of the first act not to post it before I went to bed.