this post was submitted on 21 Nov 2023
59 points (92.8% liked)

Stable Diffusion


Discuss matters related to our favourite AI Art generation technology


Abstract

We present Stable Video Diffusion — a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into generative video models by inserting temporal layers and finetuning them on small, high-quality video datasets. However, training methods in the literature vary widely, and the field has yet to agree on a unified strategy for curating video data. In this paper, we identify and evaluate three different stages for successful training of video LDMs: text-to-image pretraining, video pretraining, and high-quality video finetuning. Furthermore, we demonstrate the necessity of a well-curated pretraining dataset for generating high-quality videos and present a systematic curation process to train a strong base model, including captioning and filtering strategies. We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation. We also show that our base model provides a powerful motion representation for downstream tasks such as image-to-video generation and adaptability to camera motion-specific LoRA modules. Finally, we demonstrate that our model provides a strong multi-view 3D-prior and can serve as a base to finetune a multi-view diffusion model that jointly generates multiple views of objects in a feedforward fashion, outperforming image-based methods at a fraction of their compute budget. We release code and model weights at https://github.com/Stability-AI/generative-models

Blog: https://stability.ai/news/stable-video-diffusion-open-ai-video-model

Paper: https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf

Code: https://github.com/Stability-AI/generative-models

Waitlist: https://stability.ai/contact

Model: https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/tree/main

top 7 comments
[–] konalt 29 points 1 year ago
[–] Scew 12 points 1 year ago

damn, good looking out. Won't be using this ~~for awhile~~ til I sell a couple of kidneys.

[–] harry_balzac 6 points 1 year ago

Sell one kidney to get a 3d printer then print up more kidneys! It's like printing money! (Please don't sell your kidneys fr.)

[–] fhein 2 points 1 year ago

You can buy two second-hand RTX 3090s for about the same price as one new RTX 4090, though you'll probably need to get a new PSU as well. Or rent the hardware through runpod.io or similar for around $1/hour. Still a lot of money for most people, but it's not completely unachievable. Spend some time in the local LLM community and 48GB of VRAM will start to feel like the bare minimum if you want to use any of the better models :S

[–] Scew 2 points 1 year ago* (last edited 1 year ago)

I'm just a lowly image-generation hobbyist able to run some decent models on my 2060 Super. lol. I had the highest tier of Colab for a while, which was nice, but I didn't feel like learning how to create Jupyter notebooks, so I was at the mercy of people keeping their dependencies up to date and would more often sit down to a broken notebook than anything else. My whole rig is probably achievable for less than the price of one 3090 q.q

Edit: took 5 seconds to do a search and I was low-balling my rig. Haven't looked at prices in a while.

[–] fhein 2 points 1 year ago

Definitely not cheap, but at least not as bad as having to buy an A100 for €7000 to get 40GB of VRAM. I'm hoping second-hand GPU prices will plummet after Christmas.

[–] [email protected] 4 points 1 year ago

Here is an alternative Piped link(s):

https://piped.video/G7mihAy691g

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.