ResearchTrend.AI

Quantifying Misattribution Unfairness in Authorship Attribution

2 June 2025
Pegah Alipoormolabashi
Ajay Patel
Niranjan Balasubramanian
Abstract

Authorship misattribution can have profound consequences in real life. In forensic settings, simply being considered one of the potential authors of an evidential piece of text or communication can result in undesirable scrutiny. This raises a fairness question: is every author in the candidate pool at equal risk of misattribution? Standard evaluation measures for authorship attribution systems do not explicitly account for this notion of fairness. We introduce a simple measure, the Misattribution Unfairness Index (MAUI_k), which is based on how often authors are ranked in the top k for texts they did not write. Using this measure, we quantify the unfairness of five models on two different datasets. All models exhibit high levels of unfairness, with elevated risk for some authors. Furthermore, we find that this unfairness relates to how the models embed the authors as vectors in the latent search space. In particular, we observe that the risk of misattribution is higher for authors closer to the centroid (or center) of the embedded authors in the haystack. These results indicate the potential for harm and the need to communicate misattribution risk to end users, and to calibrate their expectations, when building and providing such models for downstream use.
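The abstract describes the measure only informally ("how often authors are ranked in the top k for texts they did not write"). The sketch below shows one plausible per-author misattribution rate built on that idea; the matrix layout, function name, and normalization are illustrative assumptions, not the paper's exact MAUI_k definition.

```python
import numpy as np

def per_author_misattribution_rate(scores, true_author, k=2):
    """Rate at which each author appears in the top-k candidates
    for texts they did not write.

    scores      : (n_texts, n_authors) attribution scores, higher = more likely
                  (hypothetical input format; the paper's setup may differ).
    true_author : (n_texts,) index of the actual author of each text.
    Returns an (n_authors,) array of misattribution rates, each normalized
    by the number of texts that author did not write.
    """
    n_texts, n_authors = scores.shape
    counts = np.zeros(n_authors)
    for t in range(n_texts):
        top_k = np.argsort(-scores[t])[:k]       # k highest-scoring candidates
        for a in top_k:
            if a != true_author[t]:              # ranked highly, but not the author
                counts[a] += 1
    not_written = np.array([(true_author != a).sum() for a in range(n_authors)])
    return counts / np.maximum(not_written, 1)   # avoid division by zero
```

An unfairness index could then summarize the spread of these per-author rates (e.g., their maximum or variance), capturing whether some authors bear a disproportionate share of top-k misattributions.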

@article{alipoormolabashi2025_2506.02321,
  title={Quantifying Misattribution Unfairness in Authorship Attribution},
  author={Pegah Alipoormolabashi and Ajay Patel and Niranjan Balasubramanian},
  journal={arXiv preprint arXiv:2506.02321},
  year={2025}
}