Social Good or Scientific Curiosity? Uncovering the Research Framing Behind NLP Artefacts

Abstract

Clarifying the research framing of NLP artefacts (e.g., models and datasets) is crucial to aligning research with practical applications. Recent studies have manually analyzed NLP research across domains, showing that few papers explicitly identify key stakeholders, intended uses, or appropriate contexts. In this work, we propose to automate this analysis, developing a three-component system that infers research framings by first extracting key elements (means, ends, stakeholders) and then linking them through interpretable rules and contextual reasoning. We evaluate our approach on two domains: automated fact-checking, using an existing dataset, and hate speech detection, for which we annotate a new dataset; our system achieves consistent improvements over strong LLM baselines. Finally, we apply our system to recent automated fact-checking papers and uncover three notable trends: a rise in vague or underspecified research goals, increased emphasis on scientific exploration over application, and a shift toward supporting human fact-checkers rather than pursuing full automation.

@article{chamoun2025_2505.18677,
  title={Social Good or Scientific Curiosity? Uncovering the Research Framing Behind NLP Artefacts},
  author={Eric Chamoun and Nedjma Ousidhoum and Michael Schlichtkrull and Andreas Vlachos},
  journal={arXiv preprint arXiv:2505.18677},
  year={2025}
}