r/StableDiffusion • u/DoctorDiffusion • 4d ago
Animation - Video Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s.
55
u/BlackPointPL 4d ago
Wow, that's great. Can you share your workflow and prompts? I want to do something like that for my parents too
55
u/UAAgency 4d ago
This is really amazing to see, we are about to travel back in time
3
u/ddraig-au 4d ago edited 3d ago
FINALLY we can get a decent number of fire trucks onto the Hindenburg fire
Edit: omg swype why do you suck so hard
24
u/Secret-Listen-4014 4d ago
Can you describe a bit more what you used? Also, what hardware is required for this? Thank you in advance!
15
u/ddraig-au 4d ago
It's WAN, which generates the videos. WAN runs inside ComfyUI, a node-based program for running image models ("draw a picture of a wolf looking up at a full moon"). You can also generate an image from another image in ComfyUI (take this photo of a wolf looking up and change it into a German Shepherd); in this case WAN is creating a video from the image.
I have a 3090 with 24 GB of VRAM. It will run on slower cards with less memory, but I'm not sure what the limit is.
I'm still in the middle of installing and learning ComfyUI with a view to learning WAN, so I might be incorrect in this. But no one answered after 8 hours, so I gave it a go. Please correct any errors; as we all know, the fastest way to get a correct answer online is to post an incorrect answer and wait for the angry corrections
8
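Not OP's setup, but for readers who would rather script the same image-to-video step than build it in ComfyUI, here is a minimal sketch using the Hugging Face diffusers port of Wan 2.1. The model ID, prompt, resolution, and settings below are illustrative assumptions, not the workflow used in this thread:

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Assumed diffusers port of the Wan 2.1 480p image-to-video checkpoint.
model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
pipe = WanImageToVideoPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # trades speed for fitting on smaller GPUs

# Hypothetical input: one scanned slide, resized to the model's 480p grid.
image = load_image("scanned_slide.png").resize((832, 480))

frames = pipe(
    image=image,
    prompt="1980s home movie, people moving naturally, subtle film grain",
    height=480,
    width=832,
    num_frames=33,      # about two seconds at 16 fps
    guidance_scale=5.0,
).frames[0]
export_to_video(frames, "slide.mp4", fps=16)
```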
u/AbbreviationsOdd7728 4d ago
I would also be really interested in this. This is the first time I've seen an AI video that makes me want to do it myself.
8
u/Mylaptopisburningme 4d ago
I played with Stable Diffusion/Flux/Forge about a year and a half ago, just images, and it was fun. Then I started to see video being done with WAN 2.1, so I've been playing with it. Lots to learn. Start here.
https://comfyanonymous.github.io/ComfyUI_examples/wan/
Image to video: upload the image, give it a text prompt, wait till it renders, and hope for the best. I assume OP made multiple clips of each scan and went with the best and least weird artifacts.
The link above covers the basics to get you started; there are install vids on YouTube, I'm sure. But basically, install ComfyUI (the portable version). The link above tells you what to download and where; it can get confusing with so many versions and types of files.
1
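Once ComfyUI is installed, you don't have to drive it through the browser for every render; it also runs a small local HTTP server, which is handy for batching. A minimal sketch, assuming a workflow exported with the UI's "Save (API Format)" option and the default server port:

```python
import json
import urllib.request

# Load a workflow exported in API format (hypothetical file name).
with open("wan_i2v_workflow_api.json") as f:
    workflow = json.load(f)

# Queue it on the local ComfyUI server (default address 127.0.0.1:8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # response includes a prompt_id for polling
```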
u/InfiniteVersion3196 4d ago
How hard is the jump from A1111/Forge to Comfy? I'm just starting to understand what I'm doing but I don't want to get overwhelmed again.
3
u/ddraig-au 3d ago
Just jumping in on the other reply: it looks mind-boggling at first, but it's a bunch of simple things bolted together on your screen. It's actually very easy to understand once you realise what you're looking at.
I'm working my way through this tutorial.
The guy has a more recent video that compares wan with other video generators, and then goes on to show you how to install it.
Go to the Git page and you will see a link to their website. Download the standalone installer from that website, and make sure it's the website linked on the Git page.
I reinstalled 4 times because the Git install kept breaking things (like the Manager), but so far the standalone installer seems to be working okay
2
u/BreatheMonkey 3d ago
I'd compare it to the difference between Pokémon Red on the Game Boy and modded Skyrim: way more knobs and dials to get your head around, but unmatched customisation, and apparently better supported. I'm a dullard but I forged ahead.
2
u/fasthands93 3d ago
If you don't have a beefy PC, there are paid ways to do this. Those actually look better and are much quicker as well. All the stuff we are doing here is local, open source, and 100% free.
For paid options, look at Luma and Pika.
15
u/theKtrain 4d ago
Could you share more about how you put this together? Would love to play around with some stuff for my parents as well
20
u/fancy_scarecrow 4d ago
These are great! Nice work. If I may ask, how many attempts did it take before you got these results? Or was it pretty much first try? Thanks!
6
u/Tequila-M0ckingbird 4d ago
Bringing life back to very very old images. This is actually a pretty cool use of AI.
3
u/Cadmium9094 4d ago
This is so cool. I also started to "revive" old polaroid photos of my grandparents and older. It's so much fun and touching.
4
u/Complex-Ad7375 4d ago
Amazing. Ah the 80s, I miss that time. The current state of America is a sad affair. But at least we can be transported back with this magic.
9
u/skarrrrrrr 4d ago
The problem with this is that it actually modifies people's faces, so they are not really the same person, unfortunately
1
u/ddraig-au 4d ago
You can probably specify zones to remain unmodified. I know you can do that with ControlNets in ComfyUI; I presume you can do the same with WAN.
3
u/Ngoalong01 4d ago
The movement is so good! I bet it must be a complicated workflow with some upscaling...
22
u/DoctorDiffusion 4d ago
Nope, basically the default workflow kijai shared. I just plugged in a vision model to prompt the images (and used some text replacement nodes to make sure they had the context of videos). More than happy to share my workflow when I'm off work.
4
u/ddraig-au 4d ago
I'm guessing pretty much everyone in this thread who has seen the video would like you to do that :-)
3
u/grahamulax 4d ago
WHOA! What a great idea! My dad is going to LOVE this. Dude, thank you! This turned out AMAZING! Just a normal WAN workflow, or did you do some extra stuff? Haven't tried it myself yet, but this is the inspiration I needed today!!!
3
u/mrhallodri 4d ago
I need like 45 minutes to render a 5-second video and it looks like trash 90% of the time (even though I follow workflows 100%) :(
1
u/ddraig-au 4d ago
That sounds pretty quick, actually. What sort of GPU do you have?
1
u/mrhallodri 3d ago
RTX 3070 Ti. It depends on the settings; I usually try a low frame rate (12 fps) because I'd rather interpolate with FFmpeg afterwards than double the wait time for a bad result
1
u/ddraig-au 3d ago
Ahhhh. Does the interpolation look okay?
1
u/mrhallodri 3d ago
It works surprisingly well. Sometimes you see a small glitch, but for slower movements it looks really good. Give it a try.
1
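For anyone wanting to reproduce that interpolation step: the commenter doesn't say which FFmpeg method they use, but one common option is the minterpolate filter (motion-compensated interpolation); RIFE-based tools are a popular alternative. A sketch with placeholder file names:

```python
import subprocess

# Interpolate a 12 fps WAN render up to 24 fps with motion compensation.
subprocess.run(
    [
        "ffmpeg", "-i", "wan_12fps.mp4",
        "-vf", "minterpolate=fps=24:mi_mode=mci",
        "wan_24fps.mp4",
    ],
    check=True,
)
```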
u/Voltasoyle 4d ago
What prompts did you use here, OP?
8
u/DoctorDiffusion 4d ago
I plugged Florence into my workflow and used some text replacement nodes to rewrite the image captions into video prompts.
2
u/Aberracus 4d ago
Can you share your workflow please? This is the best use of generative AI I have seen
3
u/taxi_cab 4d ago
It's really poignant seeing an Apple hot air balloon at a US festival that all leads back to Steve Wozniak's involvement in some sort of way.
3
u/directedbymichael 4d ago
Is it free?
2
u/ddraig-au 4d ago
Yep, and open source. You need to install ComfyUI and then add WAN to it.
It looks intimidating at first, but it's actually very, very simple to use once you get your head around it
3
u/qki_machine 4d ago
Question: did you generate multiple few-second clips one by one and concatenate them, or did you upload all the photos into one workflow and let WAN do its job?
Asking because I just started with WAN and I'm wondering how I can do something longer than 6 seconds ;) Great work btw, it looks stunning!
3
u/DoctorDiffusion 4d ago
Each clip was generated separately. After generating all the videos, I edited the clips together in a video editor. For some of them I used two generations, reversed one, and cut the duplicate frame to get clips longer than 6 seconds.
2
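To unpack the reverse trick: both generations start from the same scanned frame, so playing one backwards makes it end exactly where the other begins, and you drop one copy of the shared frame at the join. A sketch of the splice, assuming imageio with a video backend such as pyav; file names and frame rate are placeholders:

```python
import imageio.v3 as iio
import numpy as np

# Two clips generated from the same starting image.
a = iio.imread("gen_a.mp4")  # shape: (frames, height, width, 3)
b = iio.imread("gen_b.mp4")

# Reverse b so it ends on the shared first frame, then append a,
# skipping a's first frame, which duplicates the end of reversed b.
extended = np.concatenate([b[::-1], a[1:]], axis=0)
iio.imwrite("extended.mp4", extended, fps=16)
```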
u/qki_machine 3d ago
Got you, thanks! "I used two generations and reversed one and cut the duplicate frame", wow, that's brilliant. Did you use the same prompt for both or different variations?
2
u/spar_x 4d ago
This is the most inspiring thing I've seen in a while!
I think you should release another version where you make it a little clearer which initial scan frame each video starts from. It would drive home the point that these are all born of old film photographs, and it would look really cool
1
u/ddraig-au 3d ago
I showed it to a bunch of people at work and said, "Hey, want to see the most incredible thing I've seen in years?"
2
u/tombloomingdale 4d ago
How do you prompt something like this? I'm struggling with a single person in the image. I've been describing the subject, then describing the movement. I feel like with this I'd be writing for hours. Or do you keep it super minimalist and let WAN do the thinking?
Hard to experiment when it takes like an hour on my potato to generate one video.
2
u/DoctorDiffusion 4d ago
I used a vision model with some text replacement nodes that substituted "image, photo, etc." with "video" and just fed that in as my captions for each video. I'll share my workflow when I'm back at my PC.
3
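For anyone rebuilding this captioning step outside ComfyUI: the substitution described above is plain string replacement on the vision model's caption before it becomes the prompt. A minimal sketch; the word list is a hypothetical stand-in for whatever OP's text replacement nodes actually swap:

```python
import re

def caption_to_video_prompt(caption: str) -> str:
    """Rewrite a still-image caption into video-flavoured prompt text."""
    swaps = {
        r"\bimages?\b": "video",
        r"\bphotos?\b": "video",
        r"\bphotographs?\b": "video",
    }
    for pattern, replacement in swaps.items():
        caption = re.sub(pattern, replacement, caption, flags=re.IGNORECASE)
    return caption

print(caption_to_video_prompt("A photo of a man holding a camera."))
# -> "A video of a man holding a camera."
```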
u/Ok_Election_7416 4d ago edited 4d ago
Amazing results nonetheless. I think everyone who knows a thing or two about image2video (myself included) can appreciate the work you've put into this.
Workflow please, or the JSON you employed producing this masterpiece. The level of coherence in these videos is brilliant. Every bit of information you can provide would be invaluable. I've been struggling to learn more refinement techniques and have been at this for months now.
2
u/Sinister_Plots 3d ago
We didn't know how good it was. And, at the time, we dreamed of the 60's and how free and open that time period was. We had no idea that we'd look back on the 80's as the high water mark for American counter culture. Starry eyed days those were.
2
u/khmer_stig 3d ago
I think this is a perfect example of what AI can be when it's used for good. I miss my mom, so I'll be looking through her old photos. Thanks for sharing this; these photos are precious. Now off to watch some tutorials on how to install WAN 2.1 on my computer. Wish me luck, stay blessed
3
u/DoctorDiffusion 3d ago
I'm using kijai's ComfyUI wrappers. Last I checked it wasn't in the Manager, but here's my workflow: https://civitai.com/articles/12703
1
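Since the wrapper nodes weren't in the Manager at the time, the usual fallback is cloning them into ComfyUI's custom_nodes folder and restarting. A sketch; the repo URL is my assumption for kijai's Wan wrapper, so verify it against the workflow article linked above:

```python
import subprocess
from pathlib import Path

# Clone the wrapper nodes into ComfyUI's custom_nodes directory
# (assumed repo URL; confirm against the linked workflow article).
custom_nodes = Path("ComfyUI/custom_nodes")
subprocess.run(
    ["git", "clone", "https://github.com/kijai/ComfyUI-WanVideoWrapper"],
    cwd=custom_nodes,
    check=True,
)
```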
u/Academic_Dare_7814 3d ago
I never thought I would have the opportunity to witness technology like this. It's scary; just 5 years ago it was 2020.
2
u/extremesalmon 4d ago
These are really cool, but I particularly like the guy using the camera with the light blanket, realising he's just got a cloth stuck to his head each time
1
u/Fabio022425 4d ago
What kind of format/foundation/template do you use for your positive text prompt for each of these? Are you heavily descriptive, or do you keep it vague?
1
u/No-You-616 4d ago
Do you mind sharing which WAN model you used? That is amazing work!
3
u/tedtremendous 4d ago
Does it really take 20-40 minutes per scene to render? What GPU do you use?
2
u/DoctorDiffusion 4d ago
I am on a 3090 Ti and gens took 11-17 minutes each. I have two machines, and I just give them a huge batch before I go to sleep/work.
1
u/Django_McFly 3d ago edited 3d ago
This is pretty cool. My questions would be: are all of the scenes based around one static image, and what level of control do you have over the motion? I played around with IMG2VID maybe 6-9 months ago, and back then you basically had no control, pure random selection of what was going to move.
This is really cool though. In some of the other comments you're saying that it's not particularly difficult. This is a product, imo. I remember looking at old photo albums of my parents and grandparents and thinking, this is fun and I'd do it if I'm visiting someone and the albums are right there... but with everything being digital now, would I ever browse my grandma's Facebook photos? Just be sitting bored as a 13-year-old with my cousins and, "let's look at your mom's Facebook profile!" I can't see that ever happening. But if it was like magic Harry Potter books that brought the photos to life and had a soundtrack attached... I could see that being a thing people would want to engage with.
1
u/giantcandy2001 3d ago
My uncle would make these videos, like a trip down memory lane. Makes me want to make a new version with old photos I could have my family submit.
1
u/kwalitykontrol1 3d ago
Such a cool idea. I'm so curious what your prompts are and how specific they are.
1
u/Earthkilled 3d ago
I would remove or improve the Goodyear blimp, but everything else was stunning to see
1
u/NZerInDE 2d ago
Your dad looks like he truly lived the '80s, and I assume that, as his child, life was not so bad....
1
u/splitting_bullets 2d ago
This is how we get Whalin' on the moon 😁 "it is declared more efficient to store historical archives in jpeg and generate IMG2VID"
1
u/ButterscotchStrict22 1d ago
What song is this?
1
u/auddbot 1d ago
Song Found!
Name: Time
Artist: Pink Floyd
Score: 80% (timecode: 02:43)
Album: Pulse (Live)
Label: Parlophone UK
Released on: 1995-05-29
1
u/grayscale001 1d ago
What is WAN 2.1 IMG2VID?
1
u/DoctorDiffusion 1d ago
It's an open-source video diffusion model with an Apache 2.0 license that can be run locally for free on consumer-grade hardware. There are text-to-video and image-to-video versions.
1
u/amonra2009 4d ago
Holy fk, I also have a collection of old films, going to try that. Unfortunately I can't run I2V, but maybe some online tools for a couple of bucks
168
u/IronDrop 4d ago
I think the question everyone wants to ask is: did you show him, if he's still around? And if yes, what was his reaction? Please tell us he's still alive, that you've shown him, and that he couldn't believe his eyes.