
Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor

Main text: 10 pages; bibliography: 10 pages; 1 figure; 1 table
Abstract

In AI research and practice, rigor remains largely understood in terms of methodological rigor -- such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about AI capabilities. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception -- in addition to a more expansive understanding of (1) methodological rigor -- should include aspects related to (2) what background knowledge informs what to work on (epistemic rigor); (3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); (4) how clearly articulated the theoretical constructs under use are (conceptual rigor); (5) what is reported and how (reporting rigor); and (6) how well-supported the inferences from existing evidence are (interpretative rigor). In doing so, we also aim to provide useful language and a framework for much-needed dialogue about the AI community's work by researchers, policymakers, journalists, and other stakeholders.

@article{olteanu2025_2506.14652,
  title={Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor},
  author={Alexandra Olteanu and Su Lin Blodgett and Agathe Balayn and Angelina Wang and Fernando Diaz and Flavio du Pin Calmon and Margaret Mitchell and Michael Ekstrand and Reuben Binns and Solon Barocas},
  journal={arXiv preprint arXiv:2506.14652},
  year={2025}
}