Proactive Assistant Dialogue Generation from Streaming Egocentric Videos

Main: 3 pages · 11 figures · 11 tables · Appendix: 22 pages
Abstract

Recent advances in conversational AI have been substantial, but developing real-time systems for perceptual task guidance remains challenging. These systems must provide interactive, proactive assistance based on streaming visual inputs, yet their development is constrained by the costly and labor-intensive process of data collection and system evaluation. To address these limitations, we present a comprehensive framework with three key contributions. First, we introduce a novel data curation pipeline that synthesizes dialogues from annotated egocentric videos, resulting in \dataset, a large-scale synthetic dialogue dataset spanning multiple domains. Second, we develop a suite of automatic evaluation metrics, validated through extensive human studies. Third, we propose an end-to-end model that processes streaming video inputs to generate contextually appropriate responses, incorporating novel techniques for handling data imbalance and long-duration videos. This work lays the foundation for developing real-time, proactive AI assistants capable of guiding users through diverse tasks. Project page: this https URL
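
For intuition, the sketch below is a hypothetical illustration (not the paper's actual architecture) of the kind of streaming loop such a proactive assistant implies: frames arrive one at a time, and at each step the model either stays silent or emits an utterance. The names AssistantState, SILENCE_TOKEN, and run_streaming_assistant, as well as the encoder and dialogue_model parameters, are illustrative placeholders.

# Hypothetical sketch of a streaming, proactive assistant loop.
# All component names are assumptions, not the authors' implementation.

from dataclasses import dataclass, field
from typing import Iterator, List, Optional

SILENCE_TOKEN = "<silence>"  # assumed sentinel meaning "do not respond yet"

@dataclass
class AssistantState:
    history: List[str] = field(default_factory=list)  # past utterances kept as context

def run_streaming_assistant(
    frames: Iterator["Frame"],   # egocentric video stream (placeholder frame type)
    encoder,                     # maps a frame to a feature representation
    dialogue_model,              # maps (features, history) -> utterance or SILENCE_TOKEN
) -> Iterator[Optional[str]]:
    """Yield an utterance (or None for silence) for every incoming frame."""
    state = AssistantState()
    for frame in frames:
        features = encoder(frame)
        output = dialogue_model(features, state.history)
        if output == SILENCE_TOKEN:
            yield None                    # model chose not to interrupt the user
        else:
            state.history.append(output)  # keep responses as dialogue context
            yield output

In a setup like this, deciding when to stay silent is what makes the data-imbalance problem mentioned in the abstract concrete: most frames call for no response at all.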

@article{zhang2025_2506.05904,
  title={Proactive Assistant Dialogue Generation from Streaming Egocentric Videos},
  author={Yichi Zhang and Xin Luna Dong and Zhaojiang Lin and Andrea Madotto and Anuj Kumar and Babak Damavandi and Joyce Chai and Seungwhan Moon},
  journal={arXiv preprint arXiv:2506.05904},
  year={2025}
}