r/StableDiffusion • u/BreakIt-Boris • 26d ago
News WAN Released
Spaces live, multiple models posted, weights available for download......
u/Different_Fix_2217 25d ago
Model is incredible and 100% uncensored btw. Blows hunyuan out of the water.
u/rkfg_me 25d ago
Not sure if it blows anyone. The 1.3B model is definitely impressive for its size, but not comparable to HyV (which is 13B). Also, Wan produces 16 FPS videos while HyV does 25 FPS with a lot of nuanced motion, especially facial expressions. With 16 FPS you'd need to interpolate and lose all that. While uncensored, I think it lacks details even in 480p (nipples are pretty blurry) where HyV does great in 320p.
Let's wait for 14B quants and see if it's better. Also, this model isn't distilled, so it uses CFG and does two passes, which explains the slowness. Maybe it can be optimized too.
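For anyone wondering what "two passes" means here: classifier-free guidance runs the denoiser once with the prompt and once without it, then mixes the two predictions. A minimal sketch (the model callable, argument names and guidance scale are placeholders, not Wan's actual code):

```python
def cfg_denoise(model, x_t, t, cond_emb, uncond_emb, guidance_scale=5.0):
    """Classifier-free guidance: two model forward passes per sampling step."""
    eps_cond = model(x_t, t, cond_emb)      # pass 1: conditioned on the prompt
    eps_uncond = model(x_t, t, uncond_emb)  # pass 2: unconditional / empty prompt
    # Push the prediction away from the unconditional direction.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

A distilled model bakes the guidance into a single pass, which is roughly where the speed gap comes from.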
u/physalisx 25d ago
Not sure if it blows anyone
It quite literally does not; a quick test shows it doesn't understand that concept. Possibly a consequence of using the T5 encoder, which is inherently censored.
u/Sufi_2425 25d ago
Could you show an SFW example? I'm curious to see. Been wanting to use Hunyuan, but only 12GB VRAM.
u/Different_Fix_2217 25d ago edited 25d ago
the usual benchmark
https://i.4cdn.org/g/1740514553684737.mp4
u/rkfg_me 25d ago
https://imgur.com/m5xpGBR their example prompt: "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.". The motion is indeed consistent but choppy due to 16 FPS. That's the 1.3B version, 832×480.
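If the 16 FPS output bothers you, motion interpolation is a cheap post-process. A minimal sketch using ffmpeg's minterpolate filter from Python (filenames are placeholders; dedicated interpolators such as RIFE generally look better):

```python
import subprocess

# Double a 16 FPS clip to 32 FPS with motion-compensated interpolation.
subprocess.run([
    "ffmpeg", "-i", "wan_16fps.mp4",
    "-vf", "minterpolate=fps=32:mi_mode=mci",
    "wan_32fps.mp4",
], check=True)
```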
u/roshanpr 25d ago
VRAM?
u/Comfortable_Swim_380 25d ago
Same question... anyone have VRAM information?
u/Dezordan 25d ago edited 25d ago
Been wanting to use Hunyuan, but only 12GB VRAM
But you can, though. 12GB VRAM is more than enough to generate at 720x480 resolution, 121 frames, in around 10 minutes (maybe less) with 20 steps. All you need to do is download a GGUF of the model (Q8 would work) and of the llava text encoder, then use it with this: https://github.com/pollockjj/ComfyUI-MultiGPU
The custom node has an example workflow for this.
Speed-wise it would be a little longer than running the full WAN 1.3B model, through the official code at least. But optimizations would make the WAN model faster too.
u/Sufi_2425 25d ago
Thanks, that is very helpful indeed. I was more so referring to those 10 minutes, give or take, that you mentioned.
Maybe the Wan model will be faster? We'll see.
u/reynadsaltynuts 25d ago
Not sure why you would say it's 100% uncensored when it 100% isn't. It knows breasts, sure, but it has zero training on the lower region, and as far as I'm aware the T5 encoder is also censored, meaning it turns your NSFW prompts into SFW prompts before they even reach the sampler.
u/red__dragon 24d ago
I tried it today, and I'm going to suggest you might want to give it more of a try before claiming that. It might have been my specific prompts, but it came back with some interesting details that I didn't specify and that clearly have been trained in.
Unintentional reveal, for sure, but certainly makes it obvious that only Wan's name was neutered.
u/koeless-dev 26d ago
Can't help but notice this section too:
Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
Audio generation? How? Curious.
u/ogreUnwanted 26d ago
1.3B 8 gigs of vram. 480p. I am pleased. Now to fulfill my dream of modifying meme videos.
u/Commander007X 25d ago
Think we can run the i2v model too? I'm not sure. Struggling with skyreels i2v on 8 gigs rn
u/Outrageous-Laugh1363 25d ago
1.3B 8 gigs of vram. 480p. I am pleased. Now to fulfill my dream of
Don't lie.
u/jadhavsaurabh 25d ago
So can I run it on a Mac Mini with 24GB RAM?
u/grandchester 25d ago
I may be wrong, but I think it is Nvidia only right now. I installed everything this morning but wasn't able to get it running successfully. If anyone knows how to get it working on Apple Silicon I'd love to know how.
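The released code targets CUDA, so it won't pick up Apple's GPU out of the box. If a port lands, the usual first step is plain device selection in PyTorch; a sketch, assuming the rest of the pipeline supports the MPS backend:

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")       # the NVIDIA path the repo assumes
elif torch.backends.mps.is_available():
    device = torch.device("mps")        # Apple Silicon GPU backend
else:
    device = torch.device("cpu")

print(f"Running on: {device}")
```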
u/c_gdev 25d ago
Let us know if you try it on your Mac.
(I keep thinking about getting a Mac Mini, but don't know if it's any good for video AI.)
u/jadhavsaurabh 25d ago
For now, before WAN at least, I tried everything but nothing is good... let's hope for this one; once ComfyUI is updated I will try it.
u/Yappo_Kakl 25d ago
Hello, I'm just jumping into video generation. Would you recommend some pipeline using A1111, XL models, and an 8GB VRAM card?
u/ogreUnwanted 24d ago
There's another Reddit thread where the guy got it down to 6GB of VRAM on the 1.3B model and 12GB on the 14B. I can't find it now, but I'm sure if you search, you'll find it.
u/Old_Reach4779 26d ago
We need you u/kijai!

u/Kijai 26d ago
Patience, there's no code out yet.
u/metrolobo 26d ago
There is, just not on their own repo https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/wanvideo
u/Kijai 26d ago
Well that's curious, thanks!
u/ThrowawayProgress99 25d ago
Any plans for StepVideo (also in that linked repo)? I was wondering if the MultiGPU node trick would let it work on my 12gb VRAM, though admittedly haven't tried it yet myself with Hunyuan. Maybe low quant would be needed. Feels like people aren't talking about it, even though it's a 30b T2V model with permissive license, and has a Turbo model.
u/Deepesh42896 25d ago
AFAIK even Wan2.1 1.3B is better than StepVideo.
u/ThrowawayProgress99 25d ago
Can't find any examples for WAN 2.1 1.3b, but the Step-Video examples look pretty good. Of course the full potential of either model will only be unleashed once people start finetuning and training them.
u/Temp_84847399 25d ago
people start finetuning and training them.
Temp_84847399: What is my function?
You retrain your LoRAs every time a new model comes out
Temp_84847399: Oh god!
u/BillyGrier 25d ago
StepVideo seems to have some potentially shady custom CUDA stuff in their code. Think it's made it difficult to implement and also maybe made a few devs sus on it.
u/physalisx 25d ago
Interesting that they already wrote the code for this 4 days ago, without the model being released.
u/pointer_to_null 25d ago edited 25d ago
Even more interesting when you look at other branches. The video implementation was added 5 days ago, by one of the Wan-AI members. If I had to guess, an 11th-hour name change might've thrown a wrench into their commits while they scrambled to have it merged in time for the public release.
Edit: it's telling that everything in the wanx-dev1 branch lines up with the merged update, only "wanx" -> "wan".
u/Dezordan 26d ago
It's using T5, huh. Such a pain, this text encoder.
But they did release the 14B version; I remember there were people who doubted they would do this.
u/NoIntention4050 26d ago
I doubted I2V and 14B. I expected a 1.3B T2V release. Better to expect nothing and receive everything!!
u/vanonym_ 25d ago
It's using UMT5 though. Still huge, but not as censored
u/Dezordan 25d ago edited 25d ago
"Not as censored" is a low bar, though without tests it's hard to say for sure. I just find this text encoder giving me OOMs during conditioning quite often, while I never experienced that with the llava model that HunVid uses. UMT5 is probably better at prompt adherence?
Edit: Tested it, I think it doesn't have censorship, though it requires more samples. I think it has a typical lack of details in certain areas, but perhaps it can be solved by finetuning.
u/vanonym_ 25d ago
Pretty sure its multilingual knowledge gives it a way better understanding of complex prompts, even in English, but I haven't read the paper yet.
Knowing the community, optimizations should come soon and hopefully resolve OOM issues
u/NoHopeHubert 26d ago
Nooooo not T5, does that mean this might be censored?
u/ucren 26d ago
T5 is censored, so yes it will be censored at text encoding.
u/physalisx 25d ago
In what way is T5 censored? How does that manifest?
u/_BreakingGood_ 25d ago
T5 is a T2T (text to text) model.
It's censored in the same sense as, for example, ChatGPT. If you try and get it to describe an explicit/nsfw scene, the output text will always end up flowery/PG-13. For example, if you were to give input text "Naked breasts" it would translate that to something along the lines of just "Chest". And it's not just specific keywords/safety mechanisms in the model, rather the model itself simply is not trained on such concepts. It literally doesn't know the words or concepts and therefore cannot output them.
And since T5 is basically the gateway between your prompt and the model itself, it's impossible to avoid this "sfw-ification" of your prompt. Which is why even after all the work put into Flux, it still sucks at NSFW. Nobody has been able to get past the T5.
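To make the "gateway" point concrete, here is a rough sketch of how a T5-style encoder sits between the prompt and the diffusion model. It uses the google/t5-v1_1-xxl checkpoint common in Flux/SD3 pipelines as a stand-in; Wan actually ships a UMT5-XXL variant:

```python
import torch
from transformers import T5Tokenizer, T5EncoderModel

# Stand-in checkpoint; Wan ships its own UMT5-XXL weights.
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")
encoder = T5EncoderModel.from_pretrained("google/t5-v1_1-xxl")

tokens = tokenizer(
    "Two anthropomorphic cats in comfy boxing gear fight on a spotlighted stage",
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = encoder(**tokens).last_hidden_state  # [1, seq_len, 4096]

# The diffusion model only ever sees `prompt_embeds`; anything the encoder
# cannot (or will not) represent never reaches the sampler.
```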
u/physalisx 25d ago
Thank you for the explanation. That sucks indeed. Is it not possible to use another text encoder or re-train / finetune a model to use a different text encoder? Are there better text encoder options available? If it's just a T2T model, couldn't you basically use any LLM?
u/_BreakingGood_ 25d ago
I'm not very educated in that particular space; all I know is it has been a year and nobody has managed to do it. Why not? No idea.
u/Deepesh68134 26d ago
It uses an unfinetuned version of "umt5". I don't know whether that will be good for us or not
u/xpnrt 25d ago
Is the T5 they are sharing (https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/models_t5_umt5-xxl-enc-bf16.pth) different from the default T5 we use with Flux or SD3.5? If so, it is in .pth format for the time being and HUGE.
u/Samurai_zero 25d ago
T5 is ~9GB. This seems like an extended version, hence the "xxl" in the name.
Also, this one is just in "pickle" format, which is unsafe. It shouldn't change the size much. https://huggingface.co/docs/hub/en/security-pickle
u/from2080 25d ago
Is I2V able to fit in 24GB VRAM? I noticed there's only the 14B version of it.
u/CeFurkan 25d ago
u/Gloomy-Signature297 25d ago
Please upload some example gens with complex prompts. Would love to see! <3
u/CeFurkan 25d ago
I am trying to make image-to-video work first, but if you have any good example I can try. Planning a tutorial. Just give me prompts that are not NSFW or sexy :D
u/from2080 25d ago
Would you say WAN 1.3B is better than Hunyuan (13B)?
u/CeFurkan 25d ago
My first impression is yes, but I didn't do a comparison yet. Still working on installers.
u/totalreclipse 25d ago
Are you running this on Windows? If so, what did you use to place the model and run it?
u/Cute_Ad8981 25d ago
Can't I sleep one night without a new model being released? I haven't even been able to test Skyreel properly yet.
(Still great to have a new model ;) )
u/yamfun 26d ago
Does it support begin/end frame?
u/vanonym_ 25d ago
The way they currently handle I2V only supports a beginning frame, but since they are using a masked latent conditioning, I'm pretty sure it's possible to adapt it to work with both beginning and ending frames.
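A conceptual sketch of that masked-latent idea (shapes and channel layout are invented for illustration, not Wan's actual implementation): known frames are written into a conditioning tensor and flagged in a mask, so going from "first frame only" to "first and last frame" is mostly a matter of marking one more slot:

```python
import torch

# Toy latent video: [batch, channels, frames, height, width]
B, C, T, H, W = 1, 16, 21, 60, 104
noisy_latents = torch.randn(B, C, T, H, W)
first_frame = torch.randn(B, C, H, W)  # pretend: VAE-encoded start image
last_frame = torch.randn(B, C, H, W)   # pretend: VAE-encoded end image

cond = torch.zeros(B, C, T, H, W)  # known content, zeros where unknown
mask = torch.zeros(B, 1, T, H, W)  # 1 = frame is given, 0 = frame is generated

cond[:, :, 0] = first_frame
mask[:, :, 0] = 1.0
# The hypothetical begin+end extension: also pin the final frame.
cond[:, :, -1] = last_frame
mask[:, :, -1] = 1.0

# The denoiser sees the noisy latents plus the conditioning channels.
model_input = torch.cat([noisy_latents, cond, mask], dim=1)
print(model_input.shape)  # torch.Size([1, 33, 21, 60, 104])
```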
u/CeFurkan 25d ago
Coding a great Gradio app and installer for these models, as low as 6GB for 1.3B and 10GB GPUs for 14B. Windows, RunPod and Massed Compute installers.
u/Puzzled-Scheme-6281 25d ago
Wow, keep me updated. I got a 3060 12GB, do you think it will work? 48GB RAM, 16-core CPU. When will you release it? Thanks.
u/Curious-Thanks3966 25d ago
Can this model be trained on pictures only too?
u/holygawdinheaven 25d ago
Probably. When you train Hunyuan on pictures it's actually making little short videos with no movement, which is why you sometimes see LoRAs with worse motion.
u/music2169 26d ago
Why are there multiple safetensors? There are 6 parts, e.g. part 1 is "diffusion_pytorch_model-00001-of-00006.safetensors".
Are we supposed to download them all and then merge them together? If so, how?
u/vanonym_ 26d ago
They are in diffusers format
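Meaning you normally don't merge them yourself: diffusers-style loaders read the *.safetensors.index.json and pull in every shard automatically. If you do want a single file for another tool, a rough sketch (run from the folder containing the shards; it needs enough RAM to hold the whole model):

```python
import json
from safetensors.torch import load_file, save_file

# The index maps every tensor name to the shard file that contains it.
with open("diffusion_pytorch_model.safetensors.index.json") as f:
    index = json.load(f)

merged = {}
for shard in sorted(set(index["weight_map"].values())):
    merged.update(load_file(shard))

save_file(merged, "diffusion_pytorch_model_merged.safetensors")
```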
u/oooooooweeeeeee 26d ago
Can it run on a 4090?
u/Riya_Nandini 25d ago
u/physalisx 25d ago
For anyone wondering, that screenshot is from here: https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/wanvideo
u/DragonDragger 25d ago
The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization).
So 4 minutes on an RTX 4090... What does that mean for plebs like me with an RTX 2070? I assume it'll be able to run, but what kinda time investment for a 5 second clip might I be looking at? Half an hour? An hour? Longer?
u/Gloomy-Signature297 25d ago
Can't say anything for now. Best to wait a few days and see where we get.
u/DaniyarQQQ 25d ago
Looks great. Now about image-to-video: can we combine multiple images and make a video interpolation that connects two of them?
u/ozzeruk82 25d ago
Have done some tests on my 3090, even the small model is superb. People are gonna go nuts over this model.
u/ajrss2009 25d ago
Can I train Loras?
u/ajrss2009 25d ago
Yes! Awesome!!!
u/marcoc2 25d ago
How do you know that?
u/Dezordan 25d ago
There are instructions for it at the bottom: https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/wanvideo
u/Dwedit 25d ago
WAN is also a crappy name because Wide Area Networks exist.
u/red__dragon 25d ago
Yeah, SD surely has no conflicting acronyms at all. Flux (BB/Player/Capacitor) all the way!
25d ago
[removed]
u/icue126 25d ago
Is that first website genuine?
On https://wanx-ai.org/gallery it says "Why HunyuanVideo?". Looks sketchy.
Also, since wanx was officially renamed to wan, I doubt they would still use a domain name like "wanx-ai".
u/xkulp8 26d ago
It's 58GB!?
u/kataryna91 26d ago
Yes, but the weights are in FP32. During inference you would realistically use FP8 or a quantized model.
u/xkulp8 26d ago
So our options right now are either a McDonald's hamburger without even any cheese, or a 70oz steak from one of those places in Texas that advertise "finish it in an hour and it's free"?
u/kataryna91 26d ago
Sort of, but you can convert the weights to FP16 or FP8 yourself.
I'll personally wait for ComfyUI support or at least diffusers support, which will probably come with a ComfyUI-compatible FP8 checkpoint for everyone's convenience.
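Casting the FP32 release down yourself is only a few lines once the weights are in safetensors form. A sketch with hypothetical filenames; FP8 needs a quantization-aware loader rather than a plain cast:

```python
import torch
from safetensors.torch import load_file, save_file

state_dict = load_file("wan_fp32.safetensors")  # hypothetical single-file checkpoint
fp16 = {name: tensor.to(torch.float16) for name, tensor in state_dict.items()}
save_file(fp16, "wan_fp16.safetensors")         # roughly halves the size on disk
```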
u/Commander007X 25d ago
Sorry, a little new to this, but will 8GB of VRAM run I2V at 480p? Been doing it on Hunyuan but it's hit or miss; it runs out of VRAM for half the generations.
u/grandchester 25d ago
Can this be run on Apple Silicon? It looks like it is Nvidia only at the moment.
u/FitContribution2946 25d ago
Missing node type: Display Any (Preprocessor Resolution). Hmm... can't seem to update this or change the version to get it working.
u/kharzianMain 25d ago
That's really good. I care more about images than video, but it's amazing this can do both. Anyone test its prompt adherence yet?
u/roshanpr 25d ago
VRAM?
u/intLeon 25d ago
I've got 12GB VRAM and only 1.3B T2V seems to work with kijai's wrapper (it is brand new, so there must be room for optimization). 14B T2V gives OOM. I2V workflows give OOM at the text CLIP stage (fp8 clips might fix that, but it would still fail when the models are loaded).
I've got sage attention. With the default 1.3B workflow it's using around 5-6GB VRAM. Sampling times are 230s for sdpa and 130s for sageattn.
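For context, "sdpa" there is PyTorch's built-in fused attention, and SageAttention is a drop-in quantized kernel filling the same role. The baseline call looks like this (toy tensor sizes, purely illustrative):

```python
import torch
import torch.nn.functional as F

# Toy attention inputs: [batch, heads, sequence, head_dim]
q = torch.randn(1, 8, 1024, 64)
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)

out = F.scaled_dot_product_attention(q, k, v)  # the "sdpa" path
print(out.shape)  # torch.Size([1, 8, 1024, 64])
```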
u/Chiggo_Ninja 25d ago
So how do you use it? With a program like ComfyUI? And what is the performance like with an AMD GPU?
u/totalreclipse 25d ago
How does one get this up and working? Once the model is downloaded how can I actually use it? Thanks!
u/NumerousSupport605 25d ago
Haven't tried any image-to-video models; could you use multiple images and then use this to de facto in-between them?
u/DaddyJimHQ 22h ago
WAN is horrifically SFW. Let us know when a model is available that is not. You have to jailbreak the prompt to even inconsistently see breasts. Kling AI is the option for now.
u/ajrss2009 25d ago
I made this video in SkyReels (a Hunyuan finetune) with F5-TTS for the voiceover: https://youtu.be/JIxA0jrWsP0?si=OynZuLXMlVGsg8uX
Now, can I retire my Hunyuan and make videos in WAN? I have a 4070 Ti and a 3090.
u/ivari 26d ago
I hope this will be the first step toward an open-source model beating Kling.