---
pipeline_tag: image-to-video
license: mit
---
# AnimateLCM-I2V for Fast Image-Conditioned Video Generation in 4 Steps

AnimateLCM-I2V is a latent image-to-video consistency model fine-tuned from [AnimateLCM](https://huggingface.co/wangfuyun/AnimateLCM) following the strategy proposed in the [AnimateLCM paper](https://arxiv.org/abs/2402.00769), without requiring teacher models.

[AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data](https://arxiv.org/abs/2402.00769) by Fu-Yun Wang et al.

## Example Video

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/P3rcJbtTKYVnBfufZ_OVg.png)

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/SMZ4DAinSnrxKsVEW8dio.mp4"></video>


For more details, please refer to our [[paper](https://arxiv.org/abs/2402.00769)] | [[code](https://github.com/G-U-N/AnimateLCM)] | [[project page](https://animatelcm.github.io/)] | [[civitai](https://civitai.com/models/310920/animatelcm-i2v-fast-image-to-video-generation)].

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/KCwSoZCdxkkmtDg1LuXsP.mp4"></video>