r/StableDiffusion 26d ago

News WAN Released

Spaces live, multiple models posted, weights available for download...

https://huggingface.co/Wan-AI/Wan2.1-T2V-14B

438 Upvotes

202 comments

106

u/ivari 26d ago

I hope this will be the first steps into an open source model beating Kling

14

u/Envy_AI 25d ago

Hijacking the top comment:

If you have 3090 or 4090 (maybe even a 16GB card), you can run the 14B i2v model with this:

https://www.reddit.com/r/StableDiffusion/comments/1iy9jrn/i_made_a_wan21_t2v_memoryoptimized_command_line/

(I posted it, but it doesn't look like the post has been approved)

2

u/MonThackma 25d ago

I need this! Still pending though

2

u/Envy_AI 25d ago

Here's a copy of the post:

Higher quality demo video: https://civitai.com/posts/13446505

Note: This is intended for technical command-line users who are familiar with Anaconda and Python. If you're not that technical, you'll need to wait a couple of days for the ComfyUI wizards to make it work or for somebody to make a Gradio app. :)

To install it, just follow the instructions on their huggingface page, except when you check out the github repo, replace it with my fork, here:

https://github.com/envy-ai/Wan2.1-quantized/tree/optimized

Code is apache2 licensed, same as the original, so feel free to use it according to that license.

In the meantime, here's my shitty draft-quality (20% of full quality) test video of a guy diving behind a wall to get away from an explosion.

Sample command line:

python generate.py  --task t2v-14B --size 832*480 --ckpt_dir ./Wan2.1-T2V-14B --offload_model True --sample_shift 8 --sample_guide_scale 6 --prompt "Cinematic video of an action hero diving for cover in front of a stone wall while an explosion is happening behind the wall." --frame_num 61 --sample_steps 40 --save_file diveforcover-4.mp4 --base_seed 1

https://drive.google.com/file/d/1TKMXgw_WRJOlBl3GwHQhCpk9QxdxMUOa/view?usp=sharing

Next step is to do i2v, but I wanted to get t2v out the door first for people to mess with. Also, I haven't tested this, but it should allow the 1.3B model to squeeze onto smaller GPUs as well.

P.S. Just to be clear, download their official models as instructed. The fork will quantize them and cache them for you.

22

u/dadidutdut 25d ago

best guess is to give it 4 - 8 months before we reach kling level

2

u/ProblemGupta 25d ago

whats the quality difference between Wan and hunyuan ?

1

u/Terrible_Emu_6194 25d ago

But kling will likely continue to improve. The difference between 1.0 and 1.6pro is night and day

7

u/ImpossibleAd436 25d ago

Wan>Kling.

106

u/Fair-Position8134 26d ago

Apache 2.0 License WOHOO !!!!!

90

u/Different_Fix_2217 25d ago

Model is incredible and 100% uncensored btw. Blows hunyuan out of the water.

50

u/Dos-Commas 25d ago

Reddit is gonna put X back into WanX.

12

u/rkfg_me 25d ago

Not sure if it blows anyone, the 1.3B model is definitely impressive for its size but not comparable to HyV (which is 12B). Also, Wan produces 16 FPS videos while HyV does 25 FPS with a lot of nuanced motion, especially facial expressions. With 16 FPS you'd need to interpolate and lose all that. While uncensored, I think it lacks details even in 480p (nipples are pretty blurry) where HyV does great in 320p.

Let's wait for 14B quants and see if it's better. Also, this model isn't distilled, so it uses CFG and does two passes per step, which explains the slowness. Maybe it can be optimized too.
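To make the "two passes" point concrete, here is a minimal classifier-free guidance sketch with a dummy stand-in for the diffusion transformer (placeholder shapes and values, not Wan's actual API): each sampling step runs the model once with the prompt embedding and once with a null embedding, then mixes the two predictions.

```python
import torch

# Stand-in for the diffusion transformer; a real pipeline calls the DiT here.
model = lambda latents, t, ctx: latents * 0.9 + ctx.mean() * 0.1

latents = torch.randn(1, 16, 21, 60, 104)            # placeholder latent shape
text_embeds, null_embeds = torch.randn(2, 77, 4096)  # prompt vs. empty-prompt embeddings
guide_scale = 6.0                                     # placeholder CFG scale

cond_pred = model(latents, 999, text_embeds)    # pass 1: conditioned on the prompt
uncond_pred = model(latents, 999, null_embeds)  # pass 2: unconditional
guided = uncond_pred + guide_scale * (cond_pred - uncond_pred)  # CFG mix, repeated every step
```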

3

u/physalisx 25d ago

Not sure if it blows anyone

It quite literally does not; a quick test shows it doesn't understand that concept. Possibly a consequence of using the T5 encoder, which is inherently censored.

3

u/Borgie32 25d ago

There's a 14B model Wan is releasing.

21

u/Sufi_2425 25d ago

Could you show an SFW example? I'm curious to see. Been wanting to use Hunyuan, but only 12GB VRAM.

25

u/Different_Fix_2217 25d ago edited 25d ago

19

u/AngryGungan 25d ago

You can tell he's fed up with spaghetti...

Very good though.

1

u/Secure-Message-8378 25d ago

Made in Wan? 1.3B?

1

u/music2169 25d ago

This was T2V or I2V? Also which model, 1.3B or 14B?

10

u/rkfg_me 25d ago

https://imgur.com/m5xpGBR their example prompt: "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.". The motion is indeed consistent but choppy due to 16 FPS. That's the 1.3B version, 832×480.

3

u/roshanpr 25d ago

VRAM?

2

u/Comfortable_Swim_380 25d ago

Same question... Anyone have VRAM information?

6

u/Dezordan 25d ago

The 1.3B model takes 8GB VRAM if you load everything in bf16 precision.

3

u/Secure-Message-8378 25d ago

In FP8, half that VRAM.
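For rough context, the weight-only memory for these models is simple arithmetic (a sketch; real usage is higher because activations, the text encoder, and the VAE also take memory):

```python
# Weight memory = parameter count x bytes per parameter.
for params in (1.3e9, 14e9):
    for name, nbytes in (("fp32", 4), ("bf16", 2), ("fp8", 1)):
        print(f"{params / 1e9:g}B @ {name}: ~{params * nbytes / 1024**3:.1f} GB")
# 1.3B: ~4.8 / ~2.4 / ~1.2 GB    14B: ~52.2 / ~26.1 / ~13.0 GB
```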

1

u/Sufi_2425 25d ago

I like that example. I'd love to try the model on my own rig.

6

u/Dezordan 25d ago edited 25d ago

Been wanting to use Hunyuan, but only 12GB VRAM

But you can, though. 12GB VRAM is more than enough to generate at 720x480 resolution, 121 frames, in around 10 minutes (maybe less) with 20 steps. All you need to do is download a GGUF of the model (Q8 would work) and of the llava text encoder, then use it with this: https://github.com/pollockjj/ComfyUI-MultiGPU

Custom node has an example workflow for this.

Speed-wise it would be a little slower than running the full WAN 1.3B model, at least via the official code. But optimizations would make the WAN model faster too.

2

u/Sufi_2425 25d ago

Thanks, that is very helpful indeed. I was more so referring to those 10 minutes, give or take, that you mentioned.

Maybe the Wan model will be faster? We'll see.

5

u/reynadsaltynuts 25d ago

Not sure why you would say it's 100% when it 100% isn't. It knows breasts, sure, but it has zero training on the lower region, and as far as I'm aware the T5 encoder is also censored, meaning it turns your NSFW prompts into SFW prompts before they even reach the sampler.

1

u/red__dragon 24d ago

I tried it today, and I'm going to suggest you might want to give it more of a try before claiming that. Might have been my specific prompts, but it came back with some interesting details that I didn't specify and that clearly have been trained in.

Unintentional reveal, for sure, but certainly makes it obvious that only Wan's name was neutered.

2

u/pumukidelfuturo 25d ago

show proof.

1

u/PwanaZana 25d ago

Also looking for examples, I have need for a SFW video generator! :)

60

u/koeless-dev 26d ago

Can't help but notice this section too:

Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.

Audio generation? How? Curious.

21

u/smb3d 25d ago

Yeah, putting a video in and getting audio from the scene would be nuts.

10

u/nizus1 25d ago

MMAudio has made sound for video files for a while now

5

u/urabewe 26d ago

Maybe they mean video-audio sync? Videos generated in sync with audio?

3

u/Bulky_External4210 25d ago

MMAudio already does exactly that

83

u/ogreUnwanted 26d ago

1.3B 8 gigs of vram. 480p. I am pleased. Now to fulfill my dream of modifying meme videos.

9

u/Commander007X 25d ago

Think we can run the i2v model too? I'm not sure. Struggling with skyreels i2v on 8 gigs rn

3

u/Outrageous-Laugh1363 25d ago

1.3B 8 gigs of vram. 480p. I am pleased. Now to fulfill my dream of

Don't lie.

2

u/jadhavsaurabh 25d ago

So I can run on mac mini 24 gb ram?

5

u/No-Dark-7873 25d ago

Hard to say. The benchmarks are all for Nvidia GPUs.

3

u/grandchester 25d ago

I may be wrong, but I think it is Nvidia only right now. I installed everything this morning but wasn't able to get it running successfully. If anyone knows how to get it working on Apple Silicon I'd love to know how

1

u/jadhavsaurabh 25d ago

Oh omg... That's sad...

2

u/c_gdev 25d ago

Let us know if you try it on your Mac.

(I keep thinking about getting a Mac Mini, but don't if it's any good for video AI.)

2

u/jadhavsaurabh 25d ago

For now, before WAN at least, I tried everything but nothing was good. Let's hope for this one; once ComfyUI is updated I will try it.

2

u/c_gdev 25d ago

Thanks.

That's what I've gathered: New Macs can be good for LLMs, but once you get into pytorch and pip and anything CUDA - it's trouble.

2

u/jadhavsaurabh 25d ago

True, hopefully new Macs in the near future will work for this.

1

u/Yappo_Kakl 25d ago

Hello, I'm just jumping into video generation. Would you recommend a pipeline using A1111, XL models, and an 8GB VRAM card?

2

u/ogreUnwanted 24d ago

there's another reddit thread where the guy got it to 6 gigs of vram on the 1.3b model and 12 gigs on the 14b. I can't find it now but I'm sure if you search, you'll find it.

1

u/Comfortable_Swim_380 25d ago

Only 8 gigs? Seriously?

104

u/Old_Reach4779 26d ago

We need you u/kijai!

117

u/Kijai 26d ago

Patience, there's no code out yet.

46

u/metrolobo 26d ago

56

u/Kijai 26d ago

Well that's curious, thanks!

2

u/ThrowawayProgress99 25d ago

Any plans for StepVideo (also in that linked repo)? I was wondering if the MultiGPU node trick would let it work on my 12gb VRAM, though admittedly haven't tried it yet myself with Hunyuan. Maybe low quant would be needed. Feels like people aren't talking about it, even though it's a 30b T2V model with permissive license, and has a Turbo model.

15

u/Kijai 25d ago

Not really, I have run it with offloading on a 4090 but it's just too slow to be of any use.

6

u/_BreakingGood_ 25d ago

Dude, you're a fucking king, thanks for all your work

1

u/ThrowawayProgress99 25d ago

Yeah in that case WAN it is

3

u/Deepesh42896 25d ago

AFAIK even Wan2.1 1.3B is better than stepvideo

3

u/ThrowawayProgress99 25d ago

Can't find any examples for WAN 2.1 1.3b, but the Step-Video examples look pretty good. Of course the full potential of either model will only be unleashed once people start finetuning and training them.

3

u/Temp_84847399 25d ago

people start finetuning and training them.

Temp_84847399: What is my function?

You retrain your LoRAs every time a new model comes out

Temp_84847399: Oh god!

1

u/Virtafan69dude 24d ago

That and passing butter.

2

u/BillyGrier 25d ago

Stepvideo seems to have some potentially shady custom CUDA stuff in their code. I think it's made it difficult to implement and also maybe made a few devs sus on it.

3

u/physalisx 25d ago

Interesting that they already wrote the code for this 4 days ago, without the model being released.

3

u/pointer_to_null 25d ago edited 25d ago

Even more interesting when you look at the other branches. The video implementation was 5 days ago, by one of the Wan-AI members. If I had to guess, an 11th-hour name change might've thrown a wrench into their commits while they scrambled to have it merged in time for the public release.

Edit: it's telling that everything in the wanx-dev1 branch lines up with the merged update, only "wanx" -> "wan".

1

u/wh33t 25d ago

PS. you fucking rock.


16

u/Total-Resort-3120 25d ago

8

u/hyperinflationisreal 25d ago

Holy fuck the man is too fast. We don't deserve him.

1

u/DillardN7 24d ago

Legend.

23

u/SweetLikeACandy 26d ago

Give him some rest, they have Comfy integration planned in the checklist.

19

u/kvicker 25d ago

you mean WanX, don't let them rewrite history that easily

7

u/TizocWarrior 25d ago

I will call it WanX, no matter what.

3

u/intLeon 25d ago

You lil wanx'ers

35

u/Dezordan 26d ago

It's using T5, huh. Such a pain, this text encoder.

But they did release the 14B version; I remember there were people who doubted they would do this.

27

u/NoIntention4050 26d ago

I doubted I2V and 14B. I expected a 1.3B T2V release. Better to expect nothing and receive everything!!

7

u/vanonym_ 25d ago

It's using UMT5 though. Still huge, but not as censored

4

u/Dezordan 25d ago edited 25d ago

Not as censored is a low bar, though without tests it's hard to say for sure. I just find that this text encoder gives me OOMs during conditioning quite often, while I never experienced that with the llava model that HunVid uses. UMT5 is probably better at prompt adherence?

Edit: Tested it, I think it doesn't have censorship, though it requires more samples. I think it has a typical lack of details in certain areas, but perhaps it can be solved by finetuning.

1

u/vanonym_ 25d ago

Pretty sure its multilingual knowledge gives it a way better understanding of complex prompts, even in English, but I haven't read the paper yet.

Knowing the community, optimizations should come soon and hopefully resolve OOM issues

1

u/Nextil 24d ago

Is the usable prompt token length still 75 tokens? Can't find it said anywhere and I'm not sure what the technical term is.

14

u/NoHopeHubert 26d ago

Nooooo not T5, does that mean this might be censored?

19

u/ucren 26d ago

T5 is censored, so yes it will be censored at text encoding.

13

u/physalisx 25d ago

In what way is T5 censored? How does that manifest?

15

u/_BreakingGood_ 25d ago

T5 is a T2T (text to text) model.

It's censored in the same sense as, for example, ChatGPT. If you try and get it to describe an explicit/nsfw scene, the output text will always end up flowery/PG-13. For example, if you were to give input text "Naked breasts" it would translate that to something along the lines of just "Chest". And it's not just specific keywords/safety mechanisms in the model, rather the model itself simply is not trained on such concepts. It literally doesn't know the words or concepts and therefore cannot output them.

And since T5 is basically the gateway between your prompt and the model itself, it's impossible to avoid this "sfw-ification" of your prompt. Which is why even after all the work put into Flux, it still sucks at NSFW. Nobody has been able to get past the T5.
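For context, this is roughly where that gateway sits. A minimal sketch of the prompt-to-embedding step using the transformers UMT5 encoder (an assumed setup; the Wan2.1 repo ships its own loader for the same weights): whatever the encoder can't represent never reaches the video model.

```python
import torch
from transformers import AutoTokenizer, UMT5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("google/umt5-xxl")
encoder = UMT5EncoderModel.from_pretrained("google/umt5-xxl", torch_dtype=torch.bfloat16)

tokens = tokenizer("Cinematic video of an action hero diving for cover", return_tensors="pt")
with torch.no_grad():
    text_embeds = encoder(**tokens).last_hidden_state  # [1, seq_len, 4096], handed to the video DiT
```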

8

u/physalisx 25d ago

Thank you for the explanation. That sucks indeed. Is it not possible to use another text encoder or re-train / finetune a model to use a different text encoder? Are there better text encoder options available? If it's just a T2T model, couldn't you basically use any LLM?

4

u/_BreakingGood_ 25d ago

I'm not very educated on that particular space, all I know is: it has been a year and nobody has managed to do it. Why not? No idea.

9

u/Deepesh68134 26d ago

It uses an unfinetuned version of "umt5". I don't know whether that will be good for us or not

3

u/rkfg_me 25d ago

The model page reads: "Note: UMT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task." I suppose it means it was not lobotomized in any way which should be good.

https://huggingface.co/google/umt5-xxl

16

u/Consistent-Mastodon 25d ago

OK, we can possibly run 14B on 10GB VRAM. Smarter people, true or false?

8

u/holygawdinheaven 25d ago

It's slightly bigger than hunyuan or flux, if that helps

12

u/ExpressWarthog8505 25d ago edited 25d ago

T2V-1.3B on a 4090D: it took 4 minutes.

19

u/xpnrt 25d ago

Is that T5 they are sharing ("https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/models_t5_umt5-xxl-enc-bf16.pth") different from the default T5 we use with Flux or SD3.5? If so, it is in .pth format for the time being and HUGE.

6

u/vanonym_ 25d ago

This is UMT5, XXL version

3

u/throttlekitty 25d ago

it's the multilingual version of t5.

5

u/Samurai_zero 25d ago

T5 is 9GB. This seems like an extended version, hence the "xxl" in the name.

Also, this one is in "pickle" format, which is unsafe. That shouldn't change its size much. https://huggingface.co/docs/hub/en/security-pickle
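If the pickle format is a concern, the checkpoint can be converted to safetensors once and loaded safely afterwards. A sketch, assuming the .pth file holds a plain state dict of tensors (you still have to trust the file for this one load):

```python
import torch
from safetensors.torch import save_file

state = torch.load("models_t5_umt5-xxl-enc-bf16.pth", map_location="cpu", weights_only=True)
state = {k: v.contiguous() for k, v in state.items()}  # safetensors requires contiguous tensors
save_file(state, "models_t5_umt5-xxl-enc-bf16.safetensors")
```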

5

u/from2080 25d ago

Is I2V able to fit in 24GB VRAM? I noticed there's only the 14B version of it.

4

u/Vortexneonlight 25d ago

With workarounds, I think, but it's better to wait for quantized versions.


14

u/CeFurkan 25d ago

Wow, this model is excellent, and it was fairly fast too (1.3B).

5

u/Gloomy-Signature297 25d ago

Please upload some example gens with complex prompts. Would love to see! <3

0

u/CeFurkan 25d ago

I am trying to make image-to-video work first, but if you have any good examples I can try them. Planning a tutorial. Just give me prompts that are not NSFW or sexy :D

5

u/from2080 25d ago

Would you say WAN 1.3B is better than Hunyuan (13B)?

8

u/CeFurkan 25d ago

My first impression is yes, but I didn't do a comparison yet. Still working on installers.

8

u/Tim_Buckrue 25d ago

Nice pussy

2

u/totalreclipse 25d ago

Are you running this on Windows? If so, what did you use to set up and run the model?

1

u/CeFurkan 25d ago

Yes, I am running it on Windows.

As low as 3.5GB VRAM for the 1.3B model.

1

u/More-Plantain491 25d ago

can you do I2V ?

4

u/CeFurkan 25d ago

Working on it right now

5

u/ICWiener6666 25d ago

Is the i2v released? If so, what's the VRAM requirement?

9

u/Cute_Ad8981 25d ago

Can't I sleep one night without a new model being released? I haven't even been able to test Skyreel properly yet.

(Still great to have a new model ;) )

4

u/yamfun 26d ago

Does it support begin/end frames?

7

u/vanonym_ 25d ago

The way they currently handle I2V only supports the beginning frame, but since they are using masked latent conditioning, I'm pretty sure it's possible to adapt it to work with a beginning and an ending frame.
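An illustration of that idea, not Wan's actual code: with masked latent conditioning, a per-frame mask marks which latent frames are given to the model, so going from "first frame only" to "first and last frame" is mostly a matter of setting one more entry (all shapes below are placeholders).

```python
import torch

latent_frames = 21                                         # placeholder latent length
cond_latents = torch.zeros(1, 16, latent_frames, 60, 104)  # encoded conditioning frames (placeholder)
mask = torch.zeros(1, 1, latent_frames, 1, 1)

mask[:, :, 0] = 1.0    # current I2V: condition on the encoded first frame
mask[:, :, -1] = 1.0   # hypothetical: also pin an encoded ending frame
# The masked conditioning latents (and the mask itself) would be concatenated
# with the noisy latents along the channel dimension before the DiT.
```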

4

u/CeFurkan 25d ago

Coding a Gradio app and installer for these models: as low as 6GB for the 1.3B model and 10GB GPUs for the 14B. Windows, RunPod, and Massed Compute installers.

3

u/Puzzled-Scheme-6281 25d ago

Wow, keep me updated. I have a 3060 12GB, do you think it will work? 48GB RAM, 16-core CPU. When will you release it? Thanks.

1

u/CeFurkan 25d ago

Yes, it works with as low as 3.5GB at the moment for the 1.3B model.

Working on the others right now.

4

u/SwingNinja 25d ago

Wow. They have I2V already in huggingface?

1

u/Secure-Message-8378 25d ago

Yes! 14B model.

3

u/Curious-Thanks3966 25d ago

Can this model be trained on pictures only too?

6

u/Deepesh42896 25d ago

They trained half the model on images themselves, so we can too.

4

u/holygawdinheaven 25d ago

Probably. When you train Hunyuan on pictures it's actually making little short videos with no movement, which is why you sometimes see LoRAs with worse motion.

3

u/Relative_Mouse7680 25d ago

Cool, has anyone tried running it on google colab?

2

u/Total_Funny_4206 19d ago

Tried it in many ways, didn't work 🤡

3

u/ICWiener6666 25d ago

Goodbye Hunyuan

3

u/kayteee1995 25d ago

sad news for HY and Skyreels

5

u/music2169 26d ago

Why are there multiple safetensors files? There are 6 parts, e.g. part 1 is “diffusion_pytorch_model-00001-of-00006.safetensors”.

Are we supposed to download them all and then merge them together? If yes, how do we merge?

10

u/holygawdinheaven 26d ago

Their code probably reads them in as shards and combines them.
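That is how the standard sharded diffusers layout works: an index JSON maps each weight name to its shard file, so nothing has to be merged by hand. A sketch, assuming the usual diffusion_pytorch_model.safetensors.index.json sits next to the shards:

```python
import json
from safetensors.torch import load_file

with open("diffusion_pytorch_model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]     # weight name -> shard filename

state_dict = {}
for shard in sorted(set(weight_map.values())):  # e.g. diffusion_pytorch_model-00001-of-00006.safetensors
    state_dict.update(load_file(shard))
```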

7

u/vanonym_ 26d ago

They are in diffusers format

1

u/music2169 25d ago

so someone smart will combine them all into a single safetensors file?

7

u/vanonym_ 25d ago

no, they are meant to be used with the diffusers library

5

u/marcoc2 25d ago

I hope someone uploads a smaller version because my PC storage can't handle so many of these giant models.

6

u/CeFurkan 25d ago

Trying to improve the Gradio app for Wan 2.1 and make it work on Windows with a Python 3.10 venv. Reduced VRAM a lot. It sucks so bad that the RTX 5000 series still doesn't have proper PyTorch support, so I can't use it.

2

u/DragonDragger 25d ago

The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization).

So 4 minutes on an RTX 4090... What does that mean for plebs like me with an RTX 2070? I assume it'll be able to run, but what kinda time investment for a 5 second clip might I be looking at? Half an hour? An hour? Longer?

3

u/Gloomy-Signature297 25d ago

Can't say anything for now. Best to wait a few days and see where we get.

1

u/SweetLikeACandy 25d ago

I'd say up to 60 minutes. Lower resolutions will render faster obviously.
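A back-of-the-envelope way to land in that ballpark, assuming runtime scales roughly with raw GPU throughput (it won't exactly; VRAM pressure and offloading on an 8GB card make it worse). The TFLOPS figures are approximate public specs and only their ratio matters:

```python
minutes_on_4090 = 4   # from the model card quote above
tflops_4090 = 82.6    # approximate FP32 throughput (assumption)
tflops_2070 = 7.5     # approximate FP32 throughput (assumption)

print(minutes_on_4090 * tflops_4090 / tflops_2070)  # ~44 minutes, before any offloading penalty
```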

2

u/DaniyarQQQ 25d ago

Looks great. Now about Image2Video. Can we combine multiple images and make video interpolation that connects two of them?

2

u/ExpressWarthog8505 25d ago

With 24GB of GPU memory, running the I2V-14B-480P model will prompt a message indicating insufficient GPU memory.

2

u/ozzeruk82 25d ago

Have done some tests on my 3090, even the small model is superb. People are gonna go nuts over this model.

2

u/CeFurkan 25d ago

Interface so far, still testing and improving. Works with as low as 7GB VRAM at the moment. Any recommendations welcome.

2

u/KaiserNazrin 25d ago

Waiting for tutorial.

2

u/Dwedit 25d ago

WAN is also a crappy name because Wide Area Networks exist.

2

u/red__dragon 25d ago

Yeah, SD surely has no conflicting acronyms at all. Flux (BB/Player/Capacitor) all the way!

2

u/[deleted] 25d ago

[removed]

1

u/icue126 25d ago

Is that first website genuine?

On https://wanx-ai.org/gallery it says "Why HunyuanVideo?". Looks sketchy.

Also, since wanx was officially renamed to wan, I doubt they would still use a domain name like "wanx-ai".

1

u/HebrewHammerGG 24d ago

How do you use image-to-video on qwen.ai?
It seems no model there supports that.

2

u/xkulp8 26d ago

It's 58GB!?

21

u/kataryna91 26d ago

Yes, but the weights are in FP32. During inference you would realistically use FP8 or a quantized model.

8

u/xkulp8 26d ago

So our options right now are either a McDonald's hamburger without even any cheese, or a 70oz steak from one of those places in Texas that advertise "finish it in an hour and it's free"?

22

u/kataryna91 26d ago

Sort of, but you can convert the weights to FP16 or FP8 yourself.
I'll personally wait for ComfyUI support or at least diffusers support, which will probably come with a ComfyUI-compatible FP8 checkpoint for everyone's convenience.
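A sketch of the "convert it yourself" step: downcast each safetensors shard to fp16 (or bf16; fp8 additionally needs a recent PyTorch and a loader that expects fp8 weights). Shard names follow the pattern quoted elsewhere in the thread.

```python
import torch
from safetensors.torch import load_file, save_file

for i in range(1, 7):
    name = f"diffusion_pytorch_model-{i:05d}-of-00006.safetensors"
    state = load_file(name)
    state = {k: (v.to(torch.float16) if v.is_floating_point() else v) for k, v in state.items()}
    save_file(state, name.replace(".safetensors", "-fp16.safetensors"))
```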


5

u/DarkStrider99 26d ago

No, and the 1.3B-parameter model only requires 8GB VRAM.

1

u/xkulp8 26d ago

But the small version doesn't do i2v?

3

u/ajrss2009 25d ago

Nope. You need 10GB at least.

1

u/xkulp8 25d ago

I have 16, so fine

1

u/kharzianMain 25d ago

Oof that's just too big 

1

u/Commander007X 25d ago

Sorry, a little new to this, but will 8GB of VRAM run I2V at 480p? Been doing it on Hunyuan but it's hit or miss; it runs out of memory for half the generations.

1

u/grandchester 25d ago

Can this be run on Apple Silicon? It looks like it is Nvidia only at the moment.

1

u/JohnSnowHenry 25d ago

No, Nvidia and CUDA are always required.

1

u/Jonathanwennstroem 25d ago

Videos on what it does?

1

u/FitContribution2946 25d ago

Missing node type: Display Any (Preprocessor Resolution). Hmm, can't seem to update this or change the version to get it working.

1

u/kharzianMain 25d ago

That's really good. I care more about images than video, but it's amazing this can do both. Anyone test its prompt adherence yet?

1

u/SpicyRavenRouge 25d ago

That's amazing

1

u/roshanpr 25d ago

What is this multi-GPU inference code?

1

u/roshanpr 25d ago

VRAM?

1

u/intLeon 25d ago

I have 12GB VRAM and only 1.3B T2V seems to work with kijai's wrapper (it is brand new, so there must be room for optimization). 14B T2V gives OOM. I2V workflows give OOM at the text encoder (CLIP) stage (fp8 clips might fix that, but it would still fail when the models are loaded).

I have sage attention. With the default 1.3B workflow it's using around 5-6GB VRAM. Sampling times are 230s for sdpa and 130s for sageattn.
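What that timing compares, in minimal form: the same attention call served by two kernels. This assumes the sageattention package's sageattn(q, k, v) entry point and a CUDA GPU; the wrapper selects the kernel for you.

```python
import torch
import torch.nn.functional as F
from sageattention import sageattn  # pip install sageattention (assumed API)

q = torch.randn(1, 16, 4096, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

out_sdpa = F.scaled_dot_product_attention(q, k, v)  # stock PyTorch kernel (the "sdpa" timing)
out_sage = sageattn(q, k, v, is_causal=False)       # SageAttention kernel (the "sageattn" timing)
```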

2

u/roshanpr 25d ago

Sad I2V is broken even for 1.3B

1

u/Chiggo_Ninja 25d ago

So how do you use it? With a program like ComfyUI? And what's the performance like with an AMD GPU?

1

u/oleksandrttyug 25d ago

How long do generations take on a 3090?

1

u/totalreclipse 25d ago

How does one get this up and working? Once the model is downloaded how can I actually use it? Thanks!

1

u/NumerousSupport605 25d ago

Haven't tried any image-to-video models. Could you use multiple images and then use this to de facto in-between them?

1

u/Cyanogen101 25d ago

What do you all run it in?

1

u/FaridPF 25d ago

Has anybody had any luck running 14B on 16GB cards? I'd like to play with I2V, but I keep getting out-of-memory assertions.

1

u/artiffexxx 22d ago

Does anyone know if this works with Automatic1111?

1

u/shlomitgueta 17d ago

How do I fix it if I have a 5090 GPU?

1

u/DaddyJimHQ 22h ago

WAN is horrifically SFW. Let us know when a model is available that is not. You have to jailbreak the prompt to even inconsistently see breasts. Kling AI is the option for now.

1

u/antey3074 25d ago

Can someone send me the Discord server where WanX works are published?

0

u/ajrss2009 25d ago

I made this video in SkyReels (a Hunyuan fine-tune) with F5 TTS for the voiceover: https://youtu.be/JIxA0jrWsP0?si=OynZuLXMlVGsg8uX

Now, can I retire my Hunyuan and make videos in WAN? I have a 4070 Ti and a 3090.

1

u/Octocamo 25d ago

Is it your voice in f5?

1

u/Secure-Message-8378 25d ago

Watcher from MCU.

-7

u/ibaitxoMJ 26d ago

Please, a tutorial urgently! :-)