library_name: diffusers
pipeline_tag: text-to-video
---

# 🎬 Hy1.5-Distill-Models

<img src="https://raw.githubusercontent.com/ModelTC/LightX2V/main/assets/img_lightx2v.png" width="75%" />

---

🤗 [HuggingFace](https://huggingface.co/lightx2v/Hy1.5-Distill-Models) | [GitHub](https://github.com/ModelTC/LightX2V) | [License](https://opensource.org/licenses/Apache-2.0)

---

This repository contains 4-step distilled models of HunyuanVideo-1.5, optimized for use with LightX2V. These distilled models enable **ultra-fast 4-step inference** without CFG (Classifier-Free Guidance), significantly reducing generation time while maintaining high-quality video output.

## 📋 Model List

### 4-Step Distilled Models

* **`hy1.5_t2v_480p_lightx2v_4step.safetensors`** - 480p Text-to-Video 4-step distilled model (16.7 GB)
* **`hy1.5_t2v_480p_scaled_fp8_e4m3_lightx2v_4step.safetensors`** - 480p Text-to-Video 4-step distilled model with FP8 quantization (8.85 GB)

## 🚀 Quick Start

### Installation

First, install LightX2V:

```bash
pip install -v git+https://github.com/ModelTC/LightX2V.git
```

Or build from source:

```bash
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V
pip install -v -e .
```

### Download Models

Download the distilled models from this repository:

```bash
# Using git-lfs
git lfs install
git clone https://huggingface.co/lightx2v/Hy1.5-Distill-Models

# Or download individual files using huggingface-hub
pip install huggingface-hub
python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='lightx2v/Hy1.5-Distill-Models', filename='hy1.5_t2v_480p_lightx2v_4step.safetensors', local_dir='./models')"
```

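If you want every file in the repository (both the bf16 and FP8 checkpoints) in one call, `huggingface_hub` also provides `snapshot_download`; a minimal sketch, where the `./models` target directory is just an example:

```python
from huggingface_hub import snapshot_download

# Downloads all files in the repo into ./models
snapshot_download(
    repo_id="lightx2v/Hy1.5-Distill-Models",
    local_dir="./models",
)
```
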
## 💻 Usage in LightX2V

### 4-Step Distilled Model (Base Version)

```python
"""
HunyuanVideo-1.5 text-to-video generation example.

This example demonstrates how to use LightX2V with the HunyuanVideo-1.5 4-step distilled model for T2V generation.
"""

from lightx2v import LightX2VPipeline

# Initialize pipeline for HunyuanVideo-1.5
pipe = LightX2VPipeline(
    model_path="/path/to/hunyuanvideo-1.5/",  # Original model path
    model_cls="hunyuan_video_1.5",
    transformer_model_name="480p_t2v",
    task="t2v",
    # 4-step distilled model ckpt
    dit_original_ckpt="/path/to/hy1.5_t2v_480p_lightx2v_4step.safetensors",
)

# Alternative: create generator from a config JSON file
# pipe.create_generator(config_json="../configs/hunyuan_video_15/hunyuan_video_t2v_480p.json")

# Enable offloading to significantly reduce VRAM usage with minimal speed impact.
# Suitable for RTX 30/40/50 consumer GPUs.
pipe.enable_offload(
    cpu_offload=True,
    offload_granularity="block",  # For HunyuanVideo-1.5, only "block" is supported
    text_encoder_offload=True,
    image_encoder_offload=False,
    vae_offload=False,
)

# Optional: use lighttae
# pipe.enable_lightvae(
#     use_tae=True,
#     tae_path="/path/to/lighttaehy1_5.safetensors",
#     use_lightvae=False,
#     vae_path=None,
# )

# Create generator with the specified parameters.
# Note: 4-step distillation requires infer_steps=4, guidance_scale=1, and denoising_step_list.
pipe.create_generator(
    attn_mode="sage_attn2",
    infer_steps=4,  # 4-step inference
    num_frames=81,
    guidance_scale=1,  # No CFG needed for distilled models
    sample_shift=9.0,
    aspect_ratio="16:9",
    fps=16,
    denoising_step_list=[1000, 750, 500, 250],  # Required for 4-step distillation
)

# Generation parameters
seed = 123
prompt = "A close-up shot captures a scene on a polished, light-colored granite kitchen counter, illuminated by soft natural light from an unseen window. Initially, the frame focuses on a tall, clear glass filled with golden, translucent apple juice standing next to a single, shiny red apple with a green leaf still attached to its stem. The camera moves horizontally to the right. As the shot progresses, a white ceramic plate smoothly enters the frame, revealing a fresh arrangement of about seven or eight more apples, a mix of vibrant reds and greens, piled neatly upon it. A shallow depth of field keeps the focus sharply on the fruit and glass, while the kitchen backsplash in the background remains softly blurred. The scene is in a realistic style."
negative_prompt = ""
save_result_path = "/path/to/save_results/output.mp4"

# Generate video
pipe.generate(
    seed=seed,
    prompt=prompt,
    negative_prompt=negative_prompt,
    save_result_path=save_result_path,
)
```

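With these settings, the pipeline runs only four denoiser passes and writes 81 frames at 16 fps to `save_result_path`, i.e. roughly a five-second 16:9 clip at 480p.
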
### 4-Step Distilled Model with FP8 Quantization

For even lower memory usage, use the FP8 quantized version:

```python
from lightx2v import LightX2VPipeline

# Initialize pipeline
pipe = LightX2VPipeline(
    model_path="/path/to/hunyuanvideo-1.5/",  # Original model path
    model_cls="hunyuan_video_1.5",
    transformer_model_name="480p_t2v",
    task="t2v",
    # 4-step distilled model ckpt
    dit_original_ckpt="/path/to/hy1.5_t2v_480p_lightx2v_4step.safetensors",
)

# Enable FP8 quantization for the distilled model
pipe.enable_quantize(
    quant_scheme='fp8-sgl',
    dit_quantized=True,
    dit_quantized_ckpt="/path/to/hy1.5_t2v_480p_scaled_fp8_e4m3_lightx2v_4step.safetensors",
    text_encoder_quantized=False,  # Optional: can also quantize the text encoder
    text_encoder_quantized_ckpt="/path/to/hy15_qwen25vl_llm_encoder_fp8_e4m3_lightx2v.safetensors",  # Optional
    image_encoder_quantized=False,
)

# Enable offloading for lower VRAM usage
pipe.enable_offload(
    cpu_offload=True,
    offload_granularity="block",
    text_encoder_offload=True,
    image_encoder_offload=False,
    vae_offload=False,
)

# Create generator
pipe.create_generator(
    attn_mode="sage_attn2",
    infer_steps=4,
    num_frames=81,
    guidance_scale=1,
    sample_shift=9.0,
    aspect_ratio="16:9",
    fps=16,
    denoising_step_list=[1000, 750, 500, 250],
)

# Generate video
pipe.generate(
    seed=123,
    prompt="Your prompt here",
    negative_prompt="",
    save_result_path="/path/to/output.mp4",
)
```

## ⚙️ Key Features

### 4-Step Distillation

These models use **step distillation** to compress the original 50-step inference process into just **4 steps** (a sketch of the resulting sampling loop follows this list), providing:

* **🚀 Ultra-Fast Inference**: Generate videos in a fraction of the time
* **💡 No CFG Required**: Set `guidance_scale=1` (no classifier-free guidance needed)
* **🎯 Quality Preservation**: Maintains high visual quality despite fewer steps
* **💾 Lower Memory**: Reduced computational requirements

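To make the role of `denoising_step_list` concrete, here is a minimal sketch of a few-step distilled sampling loop. It is illustrative only, not LightX2V's internal code; `model` and `scheduler` are hypothetical stand-ins:

```python
import torch

def distilled_sample(model, scheduler, latents: torch.Tensor, cond) -> torch.Tensor:
    """Illustrative few-step sampling loop (not LightX2V internals)."""
    # The distilled model is meant to be queried only at these four
    # timesteps (on a 1000-step scale), which is why the list is required.
    denoising_step_list = [1000, 750, 500, 250]
    for t in denoising_step_list:
        # One conditional forward pass per step; guidance_scale=1 means no
        # second unconditional pass, so the CFG cost disappears entirely.
        noise_pred = model(latents, t, cond)
        latents = scheduler.step(noise_pred, t, latents)
    return latents
```
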
### FP8 Quantization (Optional)

The FP8 quantized version (`hy1.5_t2v_480p_scaled_fp8_e4m3_lightx2v_4step.safetensors`) provides additional benefits (a sketch of the underlying scheme follows this list):

* **~50% Memory Reduction**: Roughly halves the DiT checkpoint size (16.7 GB → 8.85 GB)
* **Faster Computation**: Optimized quantized kernels
* **Maintained Quality**: FP8 quantization largely preserves visual quality

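As background on what "scaled FP8 e4m3" means: each weight tensor is stored in 8-bit floating point together with a scale factor that maps it back to its original range. A minimal PyTorch sketch of the general round-trip, as an illustration of the scheme rather than the exact layout of these checkpoints:

```python
import torch

def quantize_fp8_e4m3(w: torch.Tensor):
    """Per-tensor scaled FP8 e4m3 quantization (illustrative only)."""
    finfo = torch.finfo(torch.float8_e4m3fn)             # e4m3 max is 448
    scale = w.abs().max().clamp(min=1e-12) / finfo.max   # fit amax into FP8 range
    w_fp8 = (w / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return w_fp8.to(torch.float32) * scale

w = torch.randn(128, 128)
w_fp8, scale = quantize_fp8_e4m3(w)
err = (w - dequantize(w_fp8, scale)).abs().max()
print(f"max abs quantization error: {err:.4f}")  # small relative to w's range
```

Storing one byte per weight instead of two is what shrinks the checkpoint from 16.7 GB to 8.85 GB.
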
### Requirements

For FP8 quantized models, you need to install the SGL kernel:

```bash
# Requires torch == 2.8.0
pip install sgl-kernel --upgrade
```

Alternatively, you can use vLLM kernels:

```bash
pip install vllm
```

## 📊 Performance Benefits

Using the 4-step distilled models provides:

* **~25x Speedup**: Standard 50-step inference with CFG needs 100 denoiser calls (two forward passes per step); 4 steps without CFG need only 4
* **Lower VRAM Requirements**: Enables running on GPUs with less memory
* **No CFG Overhead**: Eliminates the classifier-free guidance computation entirely
* **Production Ready**: Fast enough for real-time or near-real-time applications

## 🔗 Related Resources

* [LightX2V GitHub Repository](https://github.com/ModelTC/LightX2V)
* [LightX2V Documentation](https://lightx2v-en.readthedocs.io/en/latest/)
* [HunyuanVideo-1.5 Original Model](https://huggingface.co/tencent/HunyuanVideo-1.5)
* [Hy1.5-Quantized-Models](https://huggingface.co/lightx2v/Hy1.5-Quantized-Models) - For quantized inference without distillation
* [LightX2V Examples](https://github.com/ModelTC/LightX2V/tree/main/examples)
* [Step Distillation Documentation](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/step_distill.html)

## 📝 Important Notes

* **Critical Configuration**:
  - Must set `infer_steps=4` (not the default 50)
  - Must set `guidance_scale=1` (CFG is not used in distilled models)
  - Must provide `denoising_step_list=[1000, 750, 500, 250]`

* **Model Loading**: All advanced configuration calls (including `enable_quantize()` and `enable_offload()`) must come **before** `create_generator()`, otherwise they will not take effect.

* **Original Model Required**: The original HunyuanVideo-1.5 weights are still required; the distilled checkpoint is loaded into the original model structure (see `dit_original_ckpt` above).

* **Attention Mode**: For best performance, we recommend SageAttention 2 (`attn_mode="sage_attn2"`).

* **Resolution**: Currently only 480p is supported. Higher resolutions may become available in future releases.

## 🤝 Citation

If you use these distilled models in your research, please cite:

```bibtex
@misc{lightx2v,
  author       = {LightX2V Contributors},
  title        = {LightX2V: Light Video Generation Inference Framework},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/ModelTC/lightx2v}},
}
```

## 📄 License

These models are released under the Apache 2.0 License, the same license as the original HunyuanVideo-1.5 model.