ResearchTrend.AI

IMPersona: Evaluating Individual Level LM Impersonation

6 April 2025
Quan Shi
Carlos E. Jimenez
Stephen Dong
Brian Seo
Caden Yao
Adam Kelch
Karthik Narasimhan
Abstract

As language models achieve increasingly human-like capabilities in conversational text generation, a critical question emerges: to what extent can these systems simulate the characteristics of specific individuals? To evaluate this, we introduce IMPersona, a framework for evaluating LMs at impersonating specific individuals' writing style and personal knowledge. Using supervised fine-tuning and a hierarchical memory-inspired retrieval system, we demonstrate that even modestly sized open-source models, such as Llama-3.1-8B-Instruct, can achieve impersonation abilities at concerning levels. In blind conversation experiments, participants (mis)identified our fine-tuned models with memory integration as human in 44.44% of interactions, compared to just 25.00% for the best prompting-based approach. We analyze these results to propose detection methods and defense strategies against such impersonation attempts. Our findings raise important questions about both the potential applications and risks of personalized language models, particularly regarding privacy, security, and the ethical deployment of such technologies in real-world contexts.
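The abstract mentions a "hierarchical memory-inspired retrieval system" paired with fine-tuned models. The paper's actual architecture is not described on this page, so the sketch below is purely illustrative: a hypothetical two-level memory (coarse topic buckets over raw messages) where retrieval first selects the best-matching topic, then ranks that topic's messages against the query by keyword overlap. All class and method names are assumptions, not the authors' implementation.

```python
from collections import defaultdict


class HierarchicalMemory:
    """Toy two-level memory: raw messages grouped under coarse topic keys.

    Retrieval first scores topics by keyword overlap with the query,
    then ranks the winning topic's messages against the query.
    """

    def __init__(self):
        self.topics = defaultdict(list)  # topic -> list of stored messages

    def add(self, topic, message):
        self.topics[topic].append(message)

    @staticmethod
    def _overlap(a, b):
        # Crude lexical similarity: count of shared lowercase tokens.
        return len(set(a.lower().split()) & set(b.lower().split()))

    def retrieve(self, query, k=2):
        if not self.topics:
            return []
        # Level 1: pick the topic whose pooled text best overlaps the query.
        best_topic = max(
            self.topics,
            key=lambda t: self._overlap(query, t + " " + " ".join(self.topics[t])),
        )
        # Level 2: rank that topic's messages by overlap with the query.
        ranked = sorted(
            self.topics[best_topic],
            key=lambda m: self._overlap(query, m),
            reverse=True,
        )
        return ranked[:k]


memory = HierarchicalMemory()
memory.add("travel", "I visited Kyoto last spring and loved the temples")
memory.add("travel", "Planning a hiking trip in Patagonia next year")
memory.add("work", "Started a new project on retrieval systems at the lab")
print(memory.retrieve("tell me about your trip to Kyoto"))
```

A production system would replace the keyword overlap with embedding similarity and add summarization at the upper level, but the two-stage narrow-then-rank structure is the point of the sketch.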

@article{shi2025_2504.04332,
  title={IMPersona: Evaluating Individual Level LM Impersonation},
  author={Quan Shi and Carlos E. Jimenez and Stephen Dong and Brian Seo and Caden Yao and Adam Kelch and Karthik Narasimhan},
  journal={arXiv preprint arXiv:2504.04332},
  year={2025}
}