
Measurement to Meaning: A Validity-Centered Framework for AI Evaluation

Abstract

While the capabilities and utility of AI systems have advanced, rigorous norms for evaluating these systems have lagged. Grand claims, such as models achieving general reasoning capabilities, are supported by performance on narrow benchmarks, such as graduate-level exam questions, which provide a limited and potentially misleading assessment. We provide a structured approach for reasoning about the types of evaluative claims that can be made given the available evidence. For instance, our framework helps determine whether performance on a mathematical benchmark indicates an ability to solve problems on math tests or instead indicates a broader ability to reason. Our framework is well-suited to the contemporary paradigm in machine learning, where various stakeholders provide measurements and evaluations that downstream users rely on to validate their claims and decisions. At the same time, our framework also informs the construction of evaluations designed to speak to the validity of the relevant claims. By leveraging psychometrics' breakdown of validity, evaluations can prioritize the facets most critical to a given claim, improving empirical utility and decision-making efficacy. We illustrate our framework through detailed case studies of vision and language model evaluations, highlighting how explicitly considering validity strengthens the connection between evaluation evidence and the claims being made.

@article{salaudeen2025_2505.10573,
  title={Measurement to Meaning: A Validity-Centered Framework for AI Evaluation},
  author={Olawale Salaudeen and Anka Reuel and Ahmed Ahmed and Suhana Bedi and Zachary Robertson and Sudharsan Sundar and Ben Domingue and Angelina Wang and Sanmi Koyejo},
  journal={arXiv preprint arXiv:2505.10573},
  year={2025}
}