Merge pull request #654 from THUDM/CogVideoX_dev
Support SFT using ZeRO
zRzRzRzRzRzRzR authored Jan 20, 2025
2 parents f66f164 + bf73742 commit c1ca70b
Showing 34 changed files with 1,631 additions and 273 deletions.
120 changes: 82 additions & 38 deletions finetune/README.md

[日本語で読む](./README_ja.md)

If you're looking for the fine-tuning instructions for the SAT version, please check [here](../sat/README_zh.md). The
dataset format for this version differs from the one used here.

## Hardware Requirements

| Model | Training Type | Distribution Strategy | Mixed Precision | Training Resolution (FxHxW) | Hardware Requirements |
|----------------------------|----------------|--------------------------------------|-----------------|-----------------------------|-------------------------|
| cogvideox-t2v-2b | lora (rank128) | DDP | fp16 | 49x480x720 | 16GB VRAM (NVIDIA 4080) |
| cogvideox-{t2v, i2v}-5b | lora (rank128) | DDP | bf16 | 49x480x720 | 24GB VRAM (NVIDIA 4090) |
| cogvideox1.5-{t2v, i2v}-5b | lora (rank128) | DDP | bf16 | 81x768x1360 | 35GB VRAM (NVIDIA A100) |
| cogvideox-t2v-2b | sft | DDP | fp16 | 49x480x720 | 36GB VRAM (NVIDIA A100) |
| cogvideox-t2v-2b | sft | 1-GPU zero-2 + opt offload | fp16 | 49x480x720 | 17GB VRAM (NVIDIA 4090) |
| cogvideox-t2v-2b | sft | 8-GPU zero-2 | fp16 | 49x480x720 | 17GB VRAM (NVIDIA 4090) |
| cogvideox-t2v-2b | sft | 8-GPU zero-3 | fp16 | 49x480x720 | 19GB VRAM (NVIDIA 4090) |
| cogvideox-t2v-2b | sft | 8-GPU zero-3 + opt and param offload | bf16 | 49x480x720 | 14GB VRAM (NVIDIA 4080) |
| cogvideox-{t2v, i2v}-5b | sft | 1-GPU zero-2 + opt offload | bf16 | 49x480x720 | 42GB VRAM (NVIDIA A100) |
| cogvideox-{t2v, i2v}-5b | sft | 8-GPU zero-2 | bf16 | 49x480x720 | 42GB VRAM (NVIDIA 4090) |
| cogvideox-{t2v, i2v}-5b | sft | 8-GPU zero-3 | bf16 | 49x480x720 | 43GB VRAM (NVIDIA 4090) |
| cogvideox-{t2v, i2v}-5b | sft | 8-GPU zero-3 + opt and param offload | bf16 | 49x480x720 | 28GB VRAM (NVIDIA 5090) |
| cogvideox1.5-{t2v, i2v}-5b | sft | 1-GPU zero-2 + opt offload | bf16 | 81x768x1360 | 56GB VRAM (NVIDIA A100) |
| cogvideox1.5-{t2v, i2v}-5b | sft | 8-GPU zero-2 | bf16 | 81x768x1360 | 55GB VRAM (NVIDIA A100) |
| cogvideox1.5-{t2v, i2v}-5b | sft | 8-GPU zero-3 | bf16 | 81x768x1360 | 55GB VRAM (NVIDIA A100) |
| cogvideox1.5-{t2v, i2v}-5b | sft | 8-GPU zero-3 + opt and param offload | bf16 | 81x768x1360 | 40GB VRAM (NVIDIA A100) |

## Install Dependencies

Since the relevant code has not yet been merged into the official `diffusers` release, you need to fine-tune based on
the diffusers branch. Follow the steps below to install the dependencies:

```shell
git clone https://github.com/huggingface/diffusers.git
cd diffusers
pip install -e .
```
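Optionally, a quick check that the editable install is the one being picked up:

```shell
python -c "import diffusers; print(diffusers.__version__)"
```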

## Prepare the Dataset

First, you need to prepare your dataset. Depending on your task type (T2V or I2V), the dataset format will vary
slightly:

```
.
├── prompts.txt
├── videos/
├── videos.txt
├── images/      # (optional, for I2V)
└── images.txt   # (optional, for I2V)
```

Where:

- `prompts.txt`: Contains the prompts
- `videos/`: Contains the .mp4 video files
- `videos.txt`: Contains the list of video files in the `videos/` directory
- `images/`: (Optional) Contains the .png reference image files
- `images.txt`: (Optional) Contains the list of reference image files

You can download a sample dataset (T2V) [Disney Steamboat Willie](https://huggingface.co/datasets/Wild-Heart/Disney-VideoGeneration-Dataset).

If you need to use a validation dataset during training, make sure to provide a validation dataset with the same format
as the training dataset.
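Concretely, a minimal two-clip T2V layout could be created like this (all file names and the path style inside `videos.txt` are illustrative; check the sample dataset above for the authoritative format):

```shell
mkdir -p my-dataset/videos
cd my-dataset

# Line N of prompts.txt describes the clip listed on line N of videos.txt
printf '%s\n' "A cartoon mouse pilots a steamboat." \
              "A cartoon mouse whistles a tune on deck." > prompts.txt
printf '%s\n' "videos/clip_0001.mp4" \
              "videos/clip_0002.mp4"                     > videos.txt
```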

## Running Scripts to Start Fine-tuning

Before starting training, please note the following resolution requirements:

1. The number of frames must be a multiple of 8 **plus 1** (i.e., 8N+1), such as 49, 81, etc.
2. Recommended video resolutions for each model:
- CogVideoX: 480x720 (height x width)
- CogVideoX1.5: 768x1360 (height x width)
3. For samples (videos or images) that don't match the training resolution, the code will resize them directly. This may
distort the aspect ratio and affect training results. It's recommended to preprocess your samples (e.g., crop + resize
to preserve the aspect ratio) before training; one possible `ffmpeg` recipe is sketched below.

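One way to do the preprocessing recommended in point 3, assuming `ffmpeg` is available (the 480x720 / 49-frame target below matches CogVideoX; adjust it for CogVideoX1.5):

```shell
# Scale up just enough to cover 720x480 while preserving the aspect ratio,
# center-crop to the exact size, and keep at most 49 frames.
ffmpeg -i input.mp4 \
  -vf "scale=720:480:force_original_aspect_ratio=increase,crop=720:480" \
  -frames:v 49 \
  output.mp4
```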
> **Important Note**: To improve training efficiency, we automatically encode videos and cache the results on disk
> before training. If you modify the data after training has begun, please delete the `latent` directory under the video
> directory to ensure that the latest data is used.

### LoRA

```bash
# Modify the configuration parameters in train_ddp_t2v.sh (or train_ddp_i2v.sh)
# The main parameters to modify are:
# --output_dir: Output directory
# --data_root: Root directory of the dataset
# --caption_column: Path to the prompt file
# --video_column: Path to the video list file
# --train_resolution: Training resolution (frames x height x width)
# --image_column: (I2V only, optional) Path to the reference image list; remove this parameter to use the first frame of each video as the image condition
# For other important parameters, please refer to the launch script

bash train_ddp_t2v.sh  # Text-to-Video (T2V) fine-tuning
bash train_ddp_i2v.sh  # Image-to-Video (I2V) fine-tuning
```
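For illustration only, the documented parameters might be filled in as follows; all values are placeholders rather than defaults, and the real assignments live inside the launch scripts:

```shell
# Hypothetical example values mirroring the flags listed above
OUTPUT_DIR="./output/cogvideox-t2v-lora"   # --output_dir
DATA_ROOT="./data/disney-dataset"          # --data_root
CAPTION_COLUMN="prompts.txt"               # --caption_column
VIDEO_COLUMN="videos.txt"                  # --video_column
TRAIN_RESOLUTION="49x480x720"              # --train_resolution: frames x height x width (frames = 8N+1)
```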

### SFT

We provide several ZeRO configuration templates in the `configs/` directory. Please choose the appropriate training
configuration based on your needs (set the chosen file via the `deepspeed_config_file` option in `accelerate_config.yaml`).

```bash
# Modify the configuration parameters in train_zero_t2v.sh (or train_zero_i2v.sh)
# The parameters to configure are the same as for LoRA training

bash train_zero_t2v.sh  # Text-to-Video (T2V) fine-tuning
bash train_zero_i2v.sh  # Image-to-Video (I2V) fine-tuning
```

In addition to setting the parameters in the bash script, you need to set the relevant training options in the ZeRO
configuration file and ensure that they match the parameters in the bash script, such as `batch_size`,
`gradient_accumulation_steps`, and `mixed_precision`. For details, please refer to the
[DeepSpeed official documentation](https://www.deepspeed.ai/docs/config-json/).
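For reference, a minimal ZeRO-2 configuration with optimizer offload, in the spirit of the provided templates, might look like the sketch below; the file name is hypothetical, and the shipped templates in `configs/` should be preferred:

```shell
# Write an illustrative ZeRO-2 + optimizer-offload DeepSpeed config
# (keep these values consistent with the launch script)
cat > configs/zero2_offload_example.json <<'EOF'
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 1,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" }
  }
}
EOF

# Point accelerate at it, e.g. in accelerate_config.yaml:
#   deepspeed_config_file: configs/zero2_offload_example.json
```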

When using SFT training, please note:

1. For SFT training, model offload is not used during validation, so the peak VRAM usage may exceed 24GB. For GPUs with
less than 24GB VRAM, it's recommended to disable validation.

2. Validation is slow when ZeRO-3 is enabled, so it is recommended to disable validation when using ZeRO-3.

## Load the Fine-tuned Model

+ Please refer to [cli_demo.py](../inference/cli_demo.py) for instructions on how to load the fine-tuned model.

+ For SFT-trained models, please first use the `zero_to_fp32.py` script in the `checkpoint-*/` directory to merge the model weights, as sketched below.
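The checkpoint path and output file name are placeholders, and the exact output argument (file vs. directory) depends on your DeepSpeed version; see `python zero_to_fp32.py --help`:

```shell
# Consolidate the ZeRO-partitioned shards into a single fp32 state dict
cd output_dir/checkpoint-1000
python zero_to_fp32.py . pytorch_model.bin
```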

## Best Practices

+ We included 70 training videos with a resolution of `200 x 480 x 720` (frames x height x width). Through frame
skipping in the data preprocessing, we created two smaller datasets with 49 and 16 frames to speed up experiments. The
maximum frame count recommended by the CogVideoX team is 49 frames. These 70 videos were divided into three groups:
10, 25, and 50 videos, with similar conceptual nature.
+ Videos with 25 or more frames work best for training new concepts and styles.
+ It's recommended to use an identifier token, which can be specified using `--id_token`, for better training results.
This is similar to Dreambooth training, though regular fine-tuning without using this token will still work.
+ The original repository uses `lora_alpha` set to 1. We found that this value performed poorly in several runs,
possibly due to differences in the model backend and training settings. Our recommendation is to set `lora_alpha` to
be equal to the rank or `rank // 2`.
+ It's advised to use a rank of 64 or higher.