Update README to include the diffusers integration
README.md
Our codebase supports essentially all of the components commonly used in video generation. You can manage your experiments flexibly by adding the corresponding registration classes, including `ENGINE, MODEL, DATASETS, EMBEDDER, AUTO_ENCODER, DISTRIBUTION, VISUAL, DIFFUSION, PRETRAIN`, and combine them with any of our open-source algorithms to suit your needs. If you have any questions, feel free to send us your feedback at any time.
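
For illustration, here is a minimal sketch of how a custom component might be registered so that it can be referenced from a config. It assumes that the registries listed above (here `MODEL`) are importable from `utils/registry_class.py` and expose a `register_class` decorator; the import path, decorator name, and the constructor/forward signatures below are assumptions, so please check the registry module in your checkout.

```python
# Hedged sketch: registering a custom model with the MODEL registry.
# The import path and the register_class decorator are assumptions; verify them
# against utils/registry_class.py (or the equivalent module) in this repository.
import torch.nn as nn

from utils.registry_class import MODEL  # assumed import path


@MODEL.register_class()
class MyToyVideoNet(nn.Module):
    """Placeholder network that the ENGINE could build from a config entry."""

    def __init__(self, in_channels=4, out_channels=4):
        super().__init__()
        self.proj = nn.Conv3d(in_channels, out_channels, kernel_size=1)

    def forward(self, x, t=None, **kwargs):
        # x is a latent video tensor of shape (batch, channels, frames, height, width).
        return self.proj(x)
```

Once registered, such a class should be selectable by name from a config in the same way as the built-in components.
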
## Integration of I2VGenXL with 🧨 diffusers
I2VGenXL is supported in the 🧨 diffusers library. Here's how to use it:
```python
import torch
from diffusers import I2VGenXLPipeline
from diffusers.utils import load_image, export_to_gif

# Load the I2VGen-XL pipeline in fp16 and move it to the GPU.
repo_id = "ali-vilab/i2vgen-xl"
pipeline = I2VGenXLPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, variant="fp16").to("cuda")

# Conditioning image and text prompt.
image_url = "https://github.com/ali-vilab/i2vgen-xl/blob/main/data/test_images/img_0009.png?download=true"
image = load_image(image_url).convert("RGB")
prompt = "Papers were floating in the air on a table in the library"

# Generate video frames from the image and prompt, with a fixed seed for reproducibility.
generator = torch.manual_seed(8888)
frames = pipeline(
    prompt=prompt,
    image=image,
    generator=generator
).frames[0]

# Export the frames to a GIF and print the output path.
print(export_to_gif(frames))
```
Find the official documentation [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/i2vgenxl).
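
If the full fp16 pipeline does not fit in your GPU memory, a minimal memory-saving variant (assuming a diffusers release in which `I2VGenXLPipeline` supports `enable_model_cpu_offload()`, which requires the `accelerate` package) is to replace the `.to("cuda")` call with model CPU offloading:

```python
# Hedged memory-saving variant: keep weights on the CPU and move each sub-module to the
# GPU only while it runs. Slower per step, but with a much lower peak VRAM footprint.
# Assumes a diffusers version in which this pipeline supports enable_model_cpu_offload().
import torch
from diffusers import I2VGenXLPipeline

pipeline = I2VGenXLPipeline.from_pretrained(
    "ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()  # requires accelerate; replaces .to("cuda")
```

The generation call itself then stays exactly as in the example above.
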
Sample output with I2VGenXL:
<table>
  <tr>
    <td><center>
      Papers were floating in the air on a table in the library.
      <br>
      <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/i2vgen-xl-example.gif"
           alt="library"
           style="width: 300px;" />
    </center></td>
  </tr>
</table>

## BibTeX