FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation
Abstract
Tuning-free approaches that adapt large-scale pre-trained video diffusion models for identity-preserving text-to-video generation (IPT2V) have gained popularity recently due to their efficacy and scalability. However, significant challenges remain in achieving satisfactory facial dynamics while keeping the identity unchanged. In this work, we present a novel tuning-free IPT2V framework, dubbed FantasyID, which enhances the face knowledge of a pre-trained video model built on diffusion transformers (DiT). Essentially, a 3D facial geometry prior is incorporated to ensure plausible facial structures during video synthesis. To prevent the model from learning copy-paste shortcuts that simply replicate the reference face across frames, a multi-view face augmentation strategy is devised to capture diverse 2D facial appearance features, thereby increasing the dynamics of facial expressions and head poses. Additionally, after blending the 2D and 3D features as guidance, instead of naively employing cross-attention to inject guidance cues into the DiT layers, a learnable layer-aware adaptive mechanism selectively injects the fused features into each individual DiT layer, facilitating balanced modeling of identity preservation and motion dynamics. Experimental results validate our model's superiority over current tuning-free IPT2V methods.
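To make the layer-aware adaptive injection concrete, below is a minimal PyTorch sketch of how gated cross-attention might inject fused 2D/3D face guidance into a single DiT block. The module name, tensor shapes, and the tanh-gated residual form are our assumptions for illustration; the paper's actual implementation may differ.

```python
# Minimal sketch (assumptions, not the paper's code): each DiT block gets its
# own injection module whose learnable gate controls how much face guidance
# that layer absorbs, allowing layer-specific trade-offs between identity
# preservation and motion dynamics.
import torch
import torch.nn as nn

class LayerAwareInjection(nn.Module):
    """Gated cross-attention injecting fused face features into one DiT layer."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Per-layer learnable gate, initialized at zero so injection starts off
        # and each layer learns its own strength during training.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, face_feat: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) video latent tokens; face_feat: (B, M, dim) blended
        # 2D appearance + 3D geometry guidance tokens.
        attn_out, _ = self.cross_attn(self.norm(x), face_feat, face_feat)
        # tanh-gated residual keeps the update bounded and trainable per layer.
        return x + torch.tanh(self.gate) * attn_out
```

Stacking one such module per DiT block realizes the "selective" aspect: shallow and deep layers can converge to different gate values rather than receiving the guidance uniformly.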
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers (2025)
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers (2025)
- IP-FaceDiff: Identity-Preserving Facial Video Editing with Diffusion (2025)
- Ouroboros-Diffusion: Exploring Consistent Content Generation in Tuning-free Long Video Diffusion (2025)
- Face-MakeUp: Multimodal Facial Prompts for Text-to-Image Generation (2025)
- Towards Consistent and Controllable Image Synthesis for Face Editing (2025)
- EchoVideo: Identity-Preserving Human Video Generation by Multimodal Feature Fusion (2025)