Fairness Indicators for Systematic Assessments of Visual Feature Extractors
15 February 2022
Priya Goyal, Adriana Romero Soriano, C. Hazirbas, Levent Sagun, Nicolas Usunier
Communities: EGVM

Papers citing "Fairness Indicators for Systematic Assessments of Visual Feature Extractors"

25 papers

Fairness and Bias in Multimodal AI: A Survey (27 Jun 2024)
Tosin Adewumi, Lama Alkhaled, Namrata Gurung, G. V. Boven, Irene Pagliai

Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach (24 May 2024)
Huy V. Vo, Vasil Khalidov, Timothée Darcet, Théo Moutakanni, Nikita Smetanin, ..., Maxime Oquab, Armand Joulin, Hervé Jégou, Patrick Labatut, Piotr Bojanowski
Communities: SSL

No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models (22 May 2024)
Angeline Pouget, Lucas Beyer, Emanuele Bugliarello, Xiao Wang, Andreas Steiner, Xiao-Qi Zhai, Ibrahim Alabdulmohsin
Communities: VLM

Towards Geographic Inclusion in the Evaluation of Text-to-Image Models (07 May 2024)
Melissa Hall, Samuel J. Bell, Candace Ross, Adina Williams, M. Drozdzal, Adriana Romero Soriano
Communities: EGVM

Annotations on a Budget: Leveraging Geo-Data Similarity to Balance Model Performance and Annotation Cost (12 Mar 2024)
Oana Ignat, Longju Bai, Joan Nwatu, Rada Mihalcea

The Bias of Harmful Label Associations in Vision-Language Models (11 Feb 2024)
C. Hazirbas, Alicia Sun, Yonathan Efroni, Mark Ibrahim
Communities: VLM

Black-Box Access is Insufficient for Rigorous AI Audits (25 Jan 2024)
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
Communities: AAML

Incorporating Geo-Diverse Knowledge into Prompting for Increased Geographical Robustness in Object Recognition (03 Jan 2024)
Kyle Buettner, Sina Malakouti, Xiang Lorraine Li, Adriana Kovashka

Survey of Social Bias in Vision-Language Models (24 Sep 2023)
Nayeon Lee, Yejin Bang, Holy Lovenia, Samuel Cahyawijaya, Wenliang Dai, Pascale Fung
Communities: VLM

FACET: Fairness in Computer Vision Evaluation Benchmark (31 Aug 2023)
Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron B. Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross
Communities: VLM, EGVM

DIG In: Evaluating Disparities in Image Generations with Indicators for Geographic Diversity (11 Aug 2023)
Melissa Hall, Candace Ross, Adina Williams, Nicolas Carion, M. Drozdzal, Adriana Romero Soriano
Communities: EGVM

Challenges and Solutions in AI for All (20 Jul 2023)
R. Shams, Didar Zowghi, Muneera Bano

PaLI-X: On Scaling up a Multilingual Vision and Language Model (29 May 2023)
Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, ..., Mojtaba Seyedhosseini, A. Angelova, Xiaohua Zhai, N. Houlsby, Radu Soricut
Communities: VLM

DINOv2: Learning Robust Visual Features without Supervision (14 Apr 2023)
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Q. Vo, Marc Szafraniec, ..., Hervé Jégou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski
Communities: VLM, CLIP, SSL

Pinpointing Why Object Recognition Performance Degrades Across Income Levels and Geographies (11 Apr 2023)
Laura Gustafson, Megan Richards, Melissa Hall, C. Hazirbas, Diane Bouchacourt, Mark Ibrahim

Overwriting Pretrained Bias with Finetuning Data (10 Mar 2023)
Angelina Wang, Olga Russakovsky

The Casual Conversations v2 Dataset (08 Mar 2023)
Bilal Porgali, Vítor Albiero, Jordan Ryda, Cristian Canton Ferrer, C. Hazirbas

Towards Reliable Assessments of Demographic Disparities in Multi-Label Image Classifiers (16 Feb 2023)
Melissa Hall, Bobbie Chern, Laura Gustafson, Denisse Ventura, Harshad Kulkarni, Candace Ross, Nicolas Usunier

Vision-Language Models Performing Zero-Shot Tasks Exhibit Gender-based Disparities (26 Jan 2023)
Melissa Hall, Laura Gustafson, Aaron B. Adcock, Ishan Misra, Candace Ross
Communities: VLM

Simplicity Bias Leads to Amplified Performance Disparities (13 Dec 2022)
Samuel J. Bell, Levent Sagun

Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness (10 Nov 2022)
C. Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, ..., Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer

Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision (16 Feb 2022)
Priya Goyal, Quentin Duval, Isaac Seessel, Mathilde Caron, Ishan Misra, Levent Sagun, Armand Joulin, Piotr Bojanowski
Communities: VLM, SSL

Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach (17 Sep 2021)
Alessandro Fabris, Andrea Esuli, Alejandro Moreo, Fabrizio Sebastiani

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments (17 Jun 2020)
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin
Communities: OCL, SSL

ClusterFit: Improving Generalization of Visual Representations (06 Dec 2019)
Xueting Yan, Ishan Misra, Abhinav Gupta, Deepti Ghadiyaram, D. Mahajan
Communities: SSL, VLM