
ConsistentChat: Building Skeleton-Guided Consistent Dialogues for Large Language Models from Scratch

Main: 8 Pages
20 Figures
Bibliography: 2 Pages
18 Tables
Appendix: 16 Pages
Abstract

Current instruction data synthesis methods primarily focus on single-turn instructions and often neglect cross-turn coherence, resulting in context drift and reduced task completion rates in extended conversations. To address this limitation, we propose Skeleton-Guided Multi-Turn Dialogue Generation, a framework that constrains multi-turn instruction synthesis by explicitly modeling human conversational intent. It operates in two stages: (1) Intent Modeling, which captures the global structure of human dialogues by assigning each conversation to one of nine well-defined intent trajectories, ensuring a coherent and goal-oriented information flow; and (2) Skeleton Generation, which constructs a structurally grounded sequence of user queries aligned with the modeled intent, thereby serving as a scaffold that constrains and guides the downstream instruction synthesis process. Using this framework, we construct ConsistentChat, a multi-turn instruction dataset comprising approximately 15,000 conversations and 224,392 utterances. Experiments on the Light, Topdial, and MT-Eval benchmarks show that models fine-tuned on ConsistentChat achieve a 20-30% improvement in chat consistency and up to a 15% increase in task success rate, significantly outperforming models trained on existing single-turn and multi-turn instruction datasets.
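
The two-stage pipeline described above can be summarized as a short sketch. The code below is an illustrative reconstruction based only on the abstract, not the authors' released implementation: the trajectory labels, the `generate_skeleton` prompt, and the `llm` callable are all assumptions made for the example.

```python
import random

# Stage 1: Intent Modeling -- each conversation is assigned one of nine
# intent trajectories that fixes the global information flow.
# The labels below are hypothetical; the abstract does not enumerate them.
INTENT_TRAJECTORIES = [
    "problem-solving",
    "information-seeking",
    "experience-sharing",
]


def generate_skeleton(topic: str, trajectory: str, num_turns: int, llm) -> list[str]:
    """Stage 2: Skeleton Generation -- produce an ordered list of user queries
    that follows the chosen intent trajectory for the given topic.
    Assumes `llm` is a callable returning one query per line."""
    prompt = (
        f"Topic: {topic}\nIntent trajectory: {trajectory}\n"
        f"Write {num_turns} user queries that follow this trajectory in order."
    )
    return llm(prompt).splitlines()[:num_turns]


def synthesize_dialogue(topic: str, num_turns: int, llm) -> list[dict]:
    """Skeleton-guided synthesis: the skeleton constrains every turn, so later
    turns stay anchored to the modeled intent instead of drifting."""
    trajectory = random.choice(INTENT_TRAJECTORIES)
    skeleton = generate_skeleton(topic, trajectory, num_turns, llm)
    history, dialogue = [], []
    for query in skeleton:
        # Answer each skeleton query in context of the conversation so far.
        answer = llm("\n".join(history + [f"User: {query}", "Assistant:"]))
        history += [f"User: {query}", f"Assistant: {answer}"]
        dialogue.append({"user": query, "assistant": answer})
    return dialogue
```

The key design choice conveyed by the abstract is that the full sequence of user queries is fixed before any responses are generated, so the downstream synthesis cannot wander away from the modeled intent.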

@article{chen2025_2506.03558,
  title={ConsistentChat: Building Skeleton-Guided Consistent Dialogues for Large Language Models from Scratch},
  author={Jiawei Chen and Xinyan Guan and Qianhao Yuan and Guozhao Mo and Weixiang Zhou and Yaojie Lu and Hongyu Lin and Ben He and Le Sun and Xianpei Han},
  journal={arXiv preprint arXiv:2506.03558},
  year={2025}
}