
MObyGaze: a film dataset of multimodal objectification densely annotated by experts

Abstract

Characterizing and quantifying gender representation disparities in audiovisual storytelling content is necessary to grasp how stereotypes may be perpetuated on screen. In this article, we consider the high-level construct of objectification and introduce a new AI task to the ML community: characterizing and quantifying the complex multimodal (visual, speech, audio) temporal patterns that produce objectification in films. Building on film studies and psychology, we define the construct of objectification in a structured thesaurus involving 5 sub-constructs manifesting through 11 concepts spanning 3 modalities. We introduce the Multimodal Objectifying Gaze (MObyGaze) dataset, comprising 20 movies densely annotated by experts for objectification levels and concepts over freely delimited segments: it amounts to 6072 segments over 43 hours of video with fine-grained localization and categorization. We formulate different learning tasks, propose and investigate how best to learn from the diversity of labels among a low number of annotators, and benchmark recent vision, text and audio models, showing the feasibility of the task. We make our code and our dataset, described in the Croissant format, available to the community: this https URL.
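
Since the abstract states that the dataset is described in the Croissant format, a minimal sketch of how it could be loaded with the standard mlcroissant Python library is given below. The JSON-LD path, the record-set name "segments", and the fields mentioned in the comments are placeholders chosen for illustration, not MObyGaze's actual schema; the real identifiers are given on the project page linked above.

# Minimal sketch: loading a Croissant-formatted dataset with mlcroissant.
# The path and record-set name are placeholder assumptions, not the dataset's schema.
import mlcroissant as mlc

# Point this at the dataset's Croissant JSON-LD descriptor (URL or local path).
CROISSANT_JSONLD = "path/or/url/to/mobygaze_croissant.json"  # placeholder

dataset = mlc.Dataset(jsonld=CROISSANT_JSONLD)

# Iterate over one record set; "segments" is a hypothetical name for the
# densely annotated, freely delimited segments described in the abstract
# (movie id, segment boundaries, objectification level, thesaurus concepts).
for record in dataset.records(record_set="segments"):
    print(record)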

@article{tores2025_2505.22084,
  title={MObyGaze: a film dataset of multimodal objectification densely annotated by experts},
  author={Julie Tores and Elisa Ancarani and Lucile Sassatelli and Hui-Yin Wu and Clement Bergman and Lea Andolfi and Victor Ecrement and Remy Sun and Frederic Precioso and Thierry Devars and Magali Guaresi and Virginie Julliard and Sarah Lecossais},
  journal={arXiv preprint arXiv:2505.22084},
  year={2025}
}