MoG: Motion-Aware Generative Frame Interpolation

MoG is a generative video frame interpolation (VFI) model designed to synthesize intermediate frames between two input frames.

MoG marks the first explicit incorporation of motion guidance between input frames to enhance the motion awareness of generative models. We demonstrate that the intermediate flow derived from flow-based VFI methods can effectively serve as motion guidance, and we propose a simple yet efficient approach to integrate this prior into the network. As a result, MoG achieves significant improvements over existing open-source generative VFI methods, excelling in both real-world and animated scenarios.
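The core idea above is that an intermediate optical flow (as estimated by flow-based VFI methods) can guide a generative model by indicating where each pixel of the target frame should come from. A minimal, illustrative sketch of this kind of guidance is backward warping: sampling an input frame at flow-displaced coordinates to obtain a motion-aligned hint. This is a generic NumPy sketch, not MoG's actual integration mechanism; the function name and single-channel layout are assumptions for illustration.

```python
import numpy as np

def backward_warp(frame, flow):
    """Backward-warp a single-channel image with a dense flow field.

    frame: (H, W) array of pixel values.
    flow:  (H, W, 2) array of (dx, dy) displacements; output pixel (y, x)
           is sampled from frame at (y + dy, x + dx) with bilinear
           interpolation, clipped to the image border.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates for each output pixel, clipped to valid range.
    src_x = np.clip(xs + flow[..., 0], 0, w - 1)
    src_y = np.clip(ys + flow[..., 1], 0, h - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    x0 = np.floor(src_x).astype(int)
    y0 = np.floor(src_y).astype(int)
    x1 = np.clip(x0 + 1, 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    wx = src_x - x0
    wy = src_y - y0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With a flow pointing toward the correct source locations, the warped frame is already roughly aligned with the target intermediate frame, which is why such a flow makes an effective motion prior for a generative network.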

Source code is available at https://github.com/MCG-NJU/MoG-VFI.

Network Architecture

(Figure: overview of the MoG pipeline.)

Usage

We provide two model checkpoints: real.ckpt for real-world scenes and ani.ckpt for animation scenes. For detailed instructions on loading the checkpoints and performing inference, please refer to our official repository.
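As a small convenience, checkpoint selection can be wrapped in a helper. The checkpoint filenames below come from this card; the helper name and the dictionary layout are illustrative assumptions, and the actual loading and inference calls should follow the official repository.

```python
# Hypothetical helper: map a scene type to the released checkpoint file.
# Only the filenames (real.ckpt, ani.ckpt) come from the model card;
# everything else here is an illustrative assumption.
CHECKPOINTS = {
    "real": "real.ckpt",      # real-world scenes
    "animation": "ani.ckpt",  # animation scenes
}

def pick_checkpoint(scene_type: str) -> str:
    """Return the checkpoint filename for the given scene type."""
    try:
        return CHECKPOINTS[scene_type]
    except KeyError:
        raise ValueError(
            f"unknown scene type {scene_type!r}; "
            f"expected one of {sorted(CHECKPOINTS)}"
        )
```

The returned filename can then be passed to whatever loading routine the official repository prescribes (e.g. a `torch.load`-based loader).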

Citation

If you find our code useful or our work relevant, please consider citing:

@article{zhang2025motion,
  title={Motion-Aware Generative Frame Interpolation},
  author={Zhang, Guozhen and Zhu, Yuhan and Cui, Yutao and Zhao, Xiaotong and Ma, Kai and Wang, Limin},
  journal={arXiv preprint arXiv:2501.03699},
  year={2025}
}