
Enhancing Training Data Attribution with Representational Optimization

Main: 9 pages · 8 figures · 7 tables · Bibliography: 5 pages · Appendix: 16 pages
Abstract

Training data attribution (TDA) methods aim to measure how training data impacts a model's predictions. While gradient-based attribution methods, such as influence functions, offer theoretical grounding, their computational costs make them impractical for large-scale applications. Representation-based approaches are far more scalable, but typically rely on heuristic embeddings that are not optimized for attribution, limiting their fidelity. To address these challenges, we propose AirRep, a scalable, representation-based approach that closes this gap by learning task-specific and model-aligned representations optimized explicitly for TDA. AirRep introduces two key innovations: a trainable encoder tuned for attribution quality, and an attention-based pooling mechanism that enables accurate estimation of group-wise influence. We train AirRep using a ranking objective over automatically constructed training subsets labeled by their empirical effect on target predictions. Experiments on instruction-tuned LLMs demonstrate that AirRep achieves performance on par with state-of-the-art gradient-based approaches while being nearly two orders of magnitude more efficient at inference time. Further analysis highlights its robustness and generalization across tasks and models. Our code is available at this https URL.
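The abstract describes AirRep at a high level: a trainable encoder, attention-based pooling over a training subset, and a ranking objective over subsets labeled by their empirical effect on a target prediction. The following is a minimal, hypothetical sketch of that pipeline; the class and function names, the encoder architecture, the cosine-similarity scoring, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the AirRep idea from the abstract: encode training
# examples, pool a subset into one group representation with attention, score
# the group against a query, and train with a pairwise ranking loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Pool a set of example embeddings into a single group embedding."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(dim, 1)

    def forward(self, x):                                # x: (n_examples, dim)
        weights = torch.softmax(self.attn(x), dim=0)     # attention over examples
        return (weights * x).sum(dim=0)                  # (dim,)

class AirRepLike(nn.Module):
    """Trainable encoder + attention pooling (the real encoder may be an LM)."""
    def __init__(self, in_dim, dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.pool = AttentionPool(dim)

    def score(self, group_feats, query_feats):
        g = self.pool(self.encoder(group_feats))         # group representation
        q = self.encoder(query_feats).mean(dim=0)        # query representation
        return F.cosine_similarity(g, q, dim=0)          # predicted group influence

def ranking_loss(model, feats_a, feats_b, query_feats, margin=0.1):
    """Subset A was measured to help the target prediction more than subset B,
    so its predicted influence score should be higher by at least `margin`."""
    s_a = model.score(feats_a, query_feats)
    s_b = model.score(feats_b, query_feats)
    return F.relu(margin - (s_a - s_b))

# Toy usage with random features (dimensions are arbitrary).
model = AirRepLike(in_dim=768, dim=256)
subset_a, subset_b = torch.randn(8, 768), torch.randn(8, 768)
query = torch.randn(4, 768)
loss = ranking_loss(model, subset_a, subset_b, query)
loss.backward()
```

At inference time, only encoder forward passes and similarity computations are needed, which is the source of the efficiency gain over gradient-based influence estimation claimed in the abstract.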

@article{sun2025_2505.18513,
  title={Enhancing Training Data Attribution with Representational Optimization},
  author={Weiwei Sun and Haokun Liu and Nikhil Kandpal and Colin Raffel and Yiming Yang},
  journal={arXiv preprint arXiv:2505.18513},
  year={2025}
}