EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in the Wild
Abstract
Predicting when to initiate speech in real-world environments remains a fundamental challenge for conversational agents. We introduce EgoSpeak, a novel framework for real-time speech initiation prediction in egocentric streaming video. By modeling the conversation from the speaker's first-person viewpoint, EgoSpeak is tailored for human-like interactions in which a conversational agent must continuously observe its environment and dynamically decide when to talk. Our approach bridges the gap between simplified experimental setups and complex natural conversations by integrating four key capabilities: (1) first-person perspective, (2) RGB processing, (3) online processing, and (4) untrimmed video processing. We also present YT-Conversation, a diverse collection of in-the-wild conversational videos from YouTube, as a resource for large-scale pretraining. Experiments on EasyCom and Ego4D demonstrate that EgoSpeak outperforms random and silence-based baselines in real time. Our results also highlight the importance of multimodal input and context length in effectively deciding when to speak.
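The abstract frames the task as a per-frame, online decision over an untrimmed egocentric stream, evaluated against random and silence-based baselines. The sketch below illustrates that formulation under our own assumptions; the `model.predict` interface, window size, and thresholds are hypothetical and not taken from the paper.

```python
import numpy as np

def silence_baseline(audio_energy, threshold=0.01, min_silence_frames=15):
    """Hypothetical silence-based baseline: decide to speak once the
    interlocutor has been silent for min_silence_frames consecutive frames."""
    silent_run = 0
    decisions = []
    for energy in audio_energy:              # one energy value per streamed frame
        silent_run = silent_run + 1 if energy < threshold else 0
        decisions.append(silent_run >= min_silence_frames)
    return decisions

def online_initiation_loop(frame_stream, model, context_len=64, threshold=0.5):
    """Hypothetical online loop: keep a sliding window of the most recent
    egocentric frames and emit a speak/no-speak decision at every step,
    never looking at future frames."""
    context = []
    for frame in frame_stream:               # untrimmed video, arrives frame by frame
        context.append(frame)
        context = context[-context_len:]     # bounded context window
        prob = model.predict(np.stack(context))  # assumed P(initiate speech now)
        yield prob > threshold
```

The key property this sketch captures is causality: every decision uses only past and current frames within a bounded context window, which is what distinguishes the online, untrimmed setting from offline, pre-segmented evaluation.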
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- JELLY: Joint Emotion Recognition and Context Reasoning with LLMs for Conversational Speech Synthesis (2025)
- EgoMe: Follow Me via Egocentric View in Real World (2025)
- Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge (2025)
- X-LeBench: A Benchmark for Extremely Long Egocentric Video Understanding (2025)
- REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation (2025)
- MTPChat: A Multimodal Time-Aware Persona Dataset for Conversational Agents (2025)
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model (2024)