
Post Persona Alignment for Multi-Session Dialogue Generation

13 June 2025
Yi-Pei Chen
Noriki Nishida
Hideki Nakayama
Yuji Matsumoto
arXiv (abs) · PDF · HTML
Main: 3 pages · 3 figures · 4 tables · Bibliography: 3 pages · Appendix: 2 pages
Abstract

Multi-session persona-based dialogue generation presents challenges in maintaining long-term consistency and generating diverse, personalized responses. While large language models (LLMs) excel in single-session dialogues, they struggle to preserve persona fidelity and conversational coherence across extended interactions. Existing methods typically retrieve persona information before response generation, which can constrain diversity and result in generic outputs. We propose Post Persona Alignment (PPA), a novel two-stage framework that reverses this process. PPA first generates a general response based solely on dialogue context, then retrieves relevant persona memories using the response as a query, and finally refines the response to align with the speaker's persona. This post-hoc alignment strategy promotes naturalness and diversity while preserving consistency and personalization. Experiments on multi-session LLM-generated dialogue data demonstrate that PPA significantly outperforms prior approaches in consistency, diversity, and persona relevance, offering a more flexible and effective paradigm for long-term personalized dialogue generation.
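The abstract's two-stage pipeline (generate a context-only draft, retrieve persona memories using that draft as the query, then refine the draft against the retrieved memories) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the function names, the string-returning stand-ins for the two LLM calls, and the toy lexical-overlap retriever are all assumptions made for clarity.

```python
# Hypothetical sketch of the Post Persona Alignment (PPA) flow described in
# the abstract. generate_draft and refine_with_persona stand in for LLM calls;
# retrieve_memories uses a toy lexical-overlap scorer in place of a real
# retriever. All names here are illustrative, not from the paper.

def generate_draft(dialogue_context: str) -> str:
    """Stage 1: produce a general response from dialogue context alone
    (no persona information). Stand-in for an LLM call."""
    return "I usually relax on weekends."

def retrieve_memories(query: str, persona_memory: list[str], top_k: int = 2) -> list[str]:
    """Stage 2: the key PPA reversal -- use the draft *response* (not the
    context) as the retrieval query over stored persona memories."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        persona_memory,
        key=lambda m: len(q_tokens & set(m.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def refine_with_persona(draft: str, memories: list[str]) -> str:
    """Stage 3: align the draft with the retrieved persona facts.
    Stand-in for a second LLM call that rewrites the draft."""
    return draft + " " + " ".join(memories)

def ppa_respond(dialogue_context: str, persona_memory: list[str]) -> str:
    """Full PPA pipeline: draft -> retrieve (response-as-query) -> refine."""
    draft = generate_draft(dialogue_context)
    memories = retrieve_memories(draft, persona_memory)
    return refine_with_persona(draft, memories)
```

Because retrieval is conditioned on what the model actually wants to say rather than on the context alone, the draft stays diverse while the refinement step restores persona consistency.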

@article{chen2025_2506.11857,
  title={Post Persona Alignment for Multi-Session Dialogue Generation},
  author={Yi-Pei Chen and Noriki Nishida and Hideki Nakayama and Yuji Matsumoto},
  journal={arXiv preprint arXiv:2506.11857},
  year={2025}
}