Surfacing Semantic Orthogonality Across Model Safety Benchmarks: A Multi-Dimensional Analysis

Abstract

Various AI safety datasets have been developed to measure LLMs against evolving interpretations of harm. Our evaluation of five recently published open-source safety benchmarks reveals distinct semantic clusters using UMAP dimensionality reduction and k-means clustering (silhouette score: 0.470). We identify six primary harm categories whose representation varies across benchmarks: GretelAI, for example, focuses heavily on privacy concerns, while WildGuardMix emphasizes self-harm scenarios. Significant differences in prompt-length distributions suggest confounds in data collection and in interpretations of harm, while also offering useful context. Our analysis quantifies orthogonality among AI safety benchmarks, making coverage gaps transparent despite topical similarities. This quantitative framework for analyzing semantic orthogonality across safety benchmarks enables more targeted development of datasets that comprehensively address the evolving landscape of harms in AI use, however those harms come to be defined.
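The clustering pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: the paper uses UMAP for dimensionality reduction, but to keep the sketch dependency-light we substitute scikit-learn's PCA as a stand-in projection, and the toy Gaussian "embeddings" are invented for demonstration only.

```python
# Hedged sketch of the abstract's pipeline: reduce prompt embeddings to 2-D,
# cluster with k-means, and score cluster separation with the silhouette score.
# NOTE: the paper uses UMAP (umap-learn) for the reduction; PCA stands in here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy stand-in for sentence embeddings of benchmark prompts (3 synthetic groups).
emb = np.vstack([
    rng.normal(loc=center, scale=0.5, size=(100, 384))
    for center in (0.0, 3.0, 6.0)
])

# Project to 2-D (the paper would call umap.UMAP(...).fit_transform here).
coords = PCA(n_components=2, random_state=0).fit_transform(emb)

# Cluster the low-dimensional points and measure separation quality.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)
score = silhouette_score(coords, labels)
print(f"silhouette score: {score:.3f}")
```

On real benchmark prompts the score depends on the embedding model and the chosen k; the paper reports 0.470 for its six-category solution.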

@article{bennion2025_2505.17636,
  title={Surfacing Semantic Orthogonality Across Model Safety Benchmarks: A Multi-Dimensional Analysis},
  author={Jonathan Bennion and Shaona Ghosh and Mantek Singh and Nouha Dziri},
  journal={arXiv preprint arXiv:2505.17636},
  year={2025}
}