SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation
Abstract
Methods for image-to-video generation have achieved impressive, photo-realistic quality. However, adjusting specific elements in generated videos, such as object motion or camera movement, is often a tedious process of trial and error, e.g., involving re-generating videos with different random seeds. Recent techniques address this issue by fine-tuning a pre-trained model to follow conditioning signals, such as bounding boxes or point trajectories. Yet, this fine-tuning procedure can be computationally expensive, and it requires datasets with annotated object motion, which can be difficult to procure. In this work, we introduce SG-I2V, a framework for controllable image-to-video generation that is self-guided, offering zero-shot control by relying solely on the knowledge present in a pre-trained image-to-video diffusion model without the need for fine-tuning or external knowledge. Our zero-shot method outperforms unsupervised baselines while being competitive with supervised models in terms of visual quality and motion fidelity.
Community
We achieve zero-shot trajectory control in image-to-video generation by leveraging the knowledge already present in a pre-trained image-to-video diffusion model. Our method is self-guided, requiring neither fine-tuning nor external knowledge.
Project page: https://kmcode1.github.io/Projects/SG-I2V/
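For intuition, here is a minimal, hypothetical sketch of what "self-guided" zero-shot trajectory control can look like at sampling time: the pre-trained denoiser is left frozen, and the video latent is nudged at selected denoising steps so that its intermediate features follow the user-specified trajectories. This is an illustrative assumption, not the paper's exact algorithm; `denoiser`, `scheduler`, and `trajectory_loss` are placeholder interfaces.

```python
import torch

def guided_sampling(denoiser, scheduler, latent, image_cond, trajectories,
                    guidance_lr=0.1, guided_steps=range(5, 20)):
    """Denoise a video latent while steering it toward user trajectories.

    The underlying model is never fine-tuned: guidance comes only from a
    loss on the model's own intermediate features (zero-shot).
    All components below are hypothetical placeholders for illustration.
    """
    for i, t in enumerate(scheduler.timesteps):
        if i in guided_steps:
            latent = latent.detach().requires_grad_(True)
            # Hypothetical interface: one denoising pass that also exposes
            # intermediate feature maps for the guidance loss.
            noise_pred, feats = denoiser(latent, t, image_cond, return_features=True)
            # Encourage features along each trajectory to stay consistent
            # across frames (a common zero-shot motion-control heuristic;
            # the actual objective used by SG-I2V may differ).
            loss = trajectory_loss(feats, trajectories)
            grad = torch.autograd.grad(loss, latent)[0]
            latent = (latent - guidance_lr * grad).detach()
            noise_pred = noise_pred.detach()
        else:
            with torch.no_grad():
                noise_pred = denoiser(latent, t, image_cond)
        # Standard denoising update with the (possibly adjusted) latent.
        latent = scheduler.step(noise_pred, t, latent).prev_sample
    return latent
```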