```bash
git clone https://github.com/rese1f/StableVideo.git
cd StableVideo
conda create -n stablevideo python=3.11
conda activate stablevideo
pip install -r requirements.txt
```
All models and detectors can be downloaded from the ControlNet Hugging Face page (Download Link).
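If you prefer scripting the download, the checkpoints can also be fetched with the `huggingface_hub` package. This is only a sketch: the `lllyasviel/ControlNet` repo id and the remote file paths are assumptions about where ControlNet hosts these files, and the downloads are copied into `./ckpt` to match the layout shown below.

```python
# Sketch: fetch the ControlNet checkpoints into ./ckpt via huggingface_hub.
# repo_id and the remote file paths are assumptions; adjust them if the
# Hugging Face repository layout differs.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

CKPT_DIR = Path("ckpt")
CKPT_DIR.mkdir(exist_ok=True)

REMOTE_FILES = [
    "models/cldm_v15.yaml",
    "models/control_sd15_canny.pth",
    "models/control_sd15_depth.pth",
    "annotator/ckpts/dpt_hybrid-midas-501f0c75.pt",
]

for remote in REMOTE_FILES:
    cached = hf_hub_download(repo_id="lllyasviel/ControlNet", filename=remote)
    # Flatten into ckpt/<basename> so the paths match the directory tree below.
    shutil.copy(cached, CKPT_DIR / Path(remote).name)
```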
Download the example atlas pretrained by Text2LIVE.
You can also train an atlas on your own video by following NLA.
This will create a `data` folder, giving the following directory structure:
```
StableVideo
├── ...
├── ckpt
│   ├── cldm_v15.yaml
│   ├── dpt_hybrid-midas-501f0c75.pt
│   ├── control_sd15_canny.pth
│   └── control_sd15_depth.pth
├── data
│   ├── car-turn
│   │   ├── checkpoint   # NLA models are stored here
│   │   ├── car-turn     # contains video frames
│   │   └── ...
│   ├── blackswan
│   └── ...
└── ...
```
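Before launching the demo, you may want to confirm that the checkpoints and example data ended up in the right place. A minimal sanity check, assuming the layout above with the `car-turn` example:

```python
# Sketch: verify the expected checkpoints and example data exist
# (paths as in the directory tree above).
from pathlib import Path

expected = [
    Path("ckpt/cldm_v15.yaml"),
    Path("ckpt/dpt_hybrid-midas-501f0c75.pt"),
    Path("ckpt/control_sd15_canny.pth"),
    Path("ckpt/control_sd15_depth.pth"),
    Path("data/car-turn/checkpoint"),  # NLA models
    Path("data/car-turn/car-turn"),    # video frames
]

missing = [p for p in expected if not p.exists()]
if missing:
    print("Missing files/folders:")
    for p in missing:
        print(f"  {p}")
else:
    print("All checkpoints and example data found.")
```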
Run the following command to start. We provide some prompt templates to help you achieve better results.

```bash
python app.py
```
After clicking the render button, the resulting .mp4 video and keyframe will be stored in the `./log` directory.
This implementation is built partly on Text2LIVE and ControlNet.