Abstract
A unified video and action model holds significant promise for robotics, where videos provide rich scene information for action prediction, and actions provide dynamics information for video prediction. However, effectively combining video generation and action prediction remains challenging, and current video generation-based methods struggle to match the performance of direct policy learning in action accuracy and inference speed. To bridge this gap, we introduce the Unified Video Action model (UVA), which jointly optimizes video and action predictions to achieve both high accuracy and efficient action inference. The key lies in learning a joint video-action latent representation and decoupling video-action decoding. The joint latent representation bridges the visual and action domains, effectively modeling the relationship between video and action sequences. Meanwhile, the decoupled decoding, powered by two lightweight diffusion heads, enables high-speed action inference by bypassing video generation during inference. Such a unified framework further enables versatile functionality through masked input training. By selectively masking actions or videos, a single model can tackle diverse tasks beyond policy learning, such as forward and inverse dynamics modeling and video generation. Via an extensive set of experiments, we demonstrate that UVA can serve as a general-purpose solution for a wide range of robotics tasks, such as policy learning, forward/inverse dynamics and video observation prediction, without compromising performance compared to methods tailored for specific applications. Results are best viewed on https://unified-video-action-model.github.io/.
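To make the architecture described in the abstract concrete, below is a minimal, hypothetical PyTorch-style sketch of the core idea: video and action tokens are fused into a joint latent, two lightweight diffusion-style heads decode each modality separately, and the video head is simply skipped at policy-inference time. All class names (`JointLatentEncoder`, `DiffusionHead`), shapes, and the masking flags are illustrative assumptions, not the authors' released implementation (see the Code link below).

```python
# Illustrative sketch of the idea in the abstract: a joint video-action latent
# encoder plus two decoupled, lightweight diffusion-style heads. Names, shapes,
# and the masking scheme are assumptions for illustration, not the released code.
import torch
import torch.nn as nn


class JointLatentEncoder(nn.Module):
    """Fuses (optionally masked) video and action tokens into one joint latent."""

    def __init__(self, video_dim=512, action_dim=7, latent_dim=256, n_layers=4):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, latent_dim)
        self.action_proj = nn.Linear(action_dim, latent_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, latent_dim))
        layer = nn.TransformerEncoderLayer(latent_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, video_tokens, action_tokens, mask_video=False, mask_actions=False):
        v = self.video_proj(video_tokens)
        a = self.action_proj(action_tokens)
        # Swapping a modality for a learned mask token is how a single model can
        # be trained for policy learning, forward/inverse dynamics, or video
        # prediction; the exact per-task masking recipe here is assumed.
        if mask_video:
            v = self.mask_token.expand_as(v)
        if mask_actions:
            a = self.mask_token.expand_as(a)
        return self.backbone(torch.cat([v, a], dim=1))  # joint video-action latent


class DiffusionHead(nn.Module):
    """Lightweight denoising head conditioned on the joint latent (one step shown)."""

    def __init__(self, latent_dim=256, out_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + out_dim, 256), nn.GELU(), nn.Linear(256, out_dim)
        )

    def forward(self, latent, noisy_target):
        # Condition each target token on a pooled summary of the joint latent.
        cond = latent.mean(dim=1, keepdim=True).expand(-1, noisy_target.shape[1], -1)
        return self.net(torch.cat([cond, noisy_target], dim=-1))  # predicted noise


# Decoupled decoding: at policy-inference time only the action head is queried,
# so video generation is bypassed and inference stays fast.
encoder = JointLatentEncoder()
action_head = DiffusionHead(out_dim=7)    # decodes action chunks
video_head = DiffusionHead(out_dim=512)   # decodes video tokens (skipped at policy inference)

video_obs = torch.randn(1, 16, 512)       # e.g. 16 observation tokens from recent frames
action_placeholder = torch.randn(1, 8, 7) # placeholder for an 8-step action chunk
latent = encoder(video_obs, action_placeholder, mask_actions=True)  # actions unknown -> masked
noise_pred = action_head(latent, torch.randn(1, 8, 7))              # one denoising step for actions
```

Because the two heads are decoupled, skipping `video_head` at deployment removes the cost of video generation while the shared latent still benefits from joint video-action training, which is the trade-off the abstract highlights.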
Community
Video generation is powerful but too slow for real-world robotic tasks. How can we enable both video and action generation while ensuring real-time policy inference? Check out our work on the Unified Video Action Model (UVA) to find out!
Paper: https://arxiv.org/pdf/2503.00200
Website: https://unified-video-action-model.github.io/
Code: https://github.com/ShuangLI59/unified_video_action
Twitter: https://x.com/ShuangL13799063/status/1897006636067422498
This is an automated message from Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- PlaySlot: Learning Inverse Latent Dynamics for Controllable Object-Centric Video Prediction and Planning (2025)
- VILP: Imitation Learning With Latent Video Planning (2025)
- Taming Teacher Forcing for Masked Autoregressive Video Generation (2025)
- Human Motion Prediction, Reconstruction, and Generation (2025)
- Object-Centric Image to Video Generation with Language Guidance (2025)
- GEVRM: Goal-Expressive Video Generation Model For Robust Visual Manipulation (2025)
- VaViM and VaVAM: Autonomous Driving through Video Generative Modeling (2025)