From What to Respond to When to Respond: Timely Response Generation for Open-domain Dialogue Agents

While research on dialogue response generation has primarily focused on generating coherent responses conditioned on textual context, the critical question of when to respond, grounded in temporal context, remains underexplored. To bridge this gap, we propose a novel task called timely dialogue response generation and introduce the TimelyChat benchmark, which evaluates the ability of language models to predict appropriate time intervals and generate time-conditioned responses. Additionally, we construct a large-scale training dataset by leveraging unlabeled event knowledge from a temporal commonsense knowledge graph and employing a large language model (LLM) to synthesize 55K event-driven dialogues. We then train Timer, a dialogue agent designed to proactively predict time intervals and generate timely responses that align with those intervals. Experimental results show that Timer outperforms prompting-based LLMs and other fine-tuned baselines in both turn-level and dialogue-level evaluations. We publicly release our data, model, and code.
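To make the two-step behavior described above concrete, the following is a minimal sketch of interval-conditioned generation with a Hugging Face causal language model. The checkpoint name, prompt templates, and decoding settings are illustrative assumptions, not the paper's actual Timer implementation or prompt format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint; the released Timer model and its exact prompt
# format are not specified here, so this is only a sketch.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Greedy decoding helper; settings are illustrative."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Return only the newly generated continuation.
    return tokenizer.decode(
        output_ids[0, inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )


dialogue_context = (
    "User: I just put the cake in the oven.\n"
    "Agent: Sounds great! Let me know how it turns out."
)

# Step 1: predict an appropriate time interval before the next agent turn.
interval = generate(
    f"{dialogue_context}\nPredicted time until next agent turn:"
)

# Step 2: condition the next response on the predicted interval.
response = generate(f"{dialogue_context}\n[{interval.strip()} later]\nAgent:")
print(interval.strip(), "->", response.strip())
```

The key design point this sketch assumes is that the agent first commits to a time interval and then generates a response consistent with that interval, rather than generating a reply immediately after the last user turn.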
@article{jang2025_2506.14285,
  title   = {From What to Respond to When to Respond: Timely Response Generation for Open-domain Dialogue Agents},
  author  = {Seongbo Jang and Minjin Jeon and Jaehoon Lee and Seonghyeon Lee and Dongha Lee and Hwanjo Yu},
  journal = {arXiv preprint arXiv:2506.14285},
  year    = {2025}
}