Accidental Misalignment: Fine-Tuning Language Models Induces Unexpected Vulnerability

22 May 2025
Punya Syon Pandey, Samuel Simko, Kellin Pelrine, Zhijing Jin
Main: 8 pages, Bibliography: 2 pages, Appendix: 4 pages; 6 figures, 23 tables
Abstract

As large language models gain popularity, their vulnerability to adversarial attacks remains a primary concern. While fine-tuning models on domain-specific datasets is often employed to improve model performance, it can introduce vulnerabilities within the underlying model. In this work, we investigate Accidental Misalignment, unexpected vulnerabilities arising from characteristics of fine-tuning data. We begin by identifying potential correlation factors such as linguistic features, semantic similarity, and toxicity within our experimental datasets. We then evaluate the adversarial performance of these fine-tuned models and assess how dataset factors correlate with attack success rates. Lastly, we explore potential causal links, offering new insights into adversarial defense strategies and highlighting the crucial role of dataset design in preserving model alignment. Our code is available at this https URL.
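
The analysis the abstract outlines can be pictured with a short sketch. The snippet below is an illustration, not the authors' released code: it assumes each fine-tuning dataset has already been scored on a few candidate factors (here toxicity, semantic similarity, and average example length, all with invented values), that the model fine-tuned on each dataset has been probed with adversarial prompts whose outcomes were judged harmful or not, and it then correlates each factor with the resulting attack success rate. Dataset names, factor names, and all numbers are hypothetical; only scipy is required.

# Illustrative sketch of the correlation analysis described in the abstract.
# NOT the authors' released code; all names and numbers are hypothetical.
from scipy.stats import pearsonr, spearmanr

def attack_success_rate(judgements):
    """Fraction of adversarial prompts judged to elicit harmful output."""
    return sum(judgements) / len(judgements)

# Hypothetical measurements: one fine-tuned model per dataset. Each entry
# holds invented factor scores and judged attack outcomes (1 = success).
runs = {
    "medical_qa": dict(toxicity=0.02, semantic_sim=0.41, avg_len=182,
                       judgements=[0, 0, 1, 0, 0, 0, 0, 1, 0, 0]),
    "code_help":  dict(toxicity=0.01, semantic_sim=0.22, avg_len=240,
                       judgements=[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]),
    "forum_chat": dict(toxicity=0.09, semantic_sim=0.35, avg_len=95,
                       judgements=[1, 0, 1, 0, 0, 1, 0, 0, 1, 0]),
    "news_summ":  dict(toxicity=0.03, semantic_sim=0.28, avg_len=310,
                       judgements=[0, 1, 0, 0, 0, 0, 0, 0, 0, 1]),
    "roleplay":   dict(toxicity=0.06, semantic_sim=0.51, avg_len=130,
                       judgements=[1, 0, 0, 1, 0, 0, 1, 0, 0, 0]),
}

asr = [attack_success_rate(r["judgements"]) for r in runs.values()]

# Correlate each candidate dataset factor with attack success rate.
for factor in ("toxicity", "semantic_sim", "avg_len"):
    vals = [r[factor] for r in runs.values()]
    r_lin, p_lin = pearsonr(vals, asr)   # linear association
    rho, p_rank = spearmanr(vals, asr)   # rank (monotone) association
    print(f"{factor:>12}: Pearson r={r_lin:+.2f} (p={p_lin:.2f}), "
          f"Spearman rho={rho:+.2f} (p={p_rank:.2f})")

In the paper itself, such correlations are only the starting point; the authors go on to probe potential causal links, which a sketch like this does not capture.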

@article{pandey2025_2505.16789,
  title={Accidental Misalignment: Fine-Tuning Language Models Induces Unexpected Vulnerability},
  author={Punya Syon Pandey and Samuel Simko and Kellin Pelrine and Zhijing Jin},
  journal={arXiv preprint arXiv:2505.16789},
  year={2025}
}