Queer In AI: A Case Study in Community-Led Participatory AI

29 March 2023

Organizers of Queer in AI:
Anaelia Ovalle
Arjun Subramonian
Ashwin Singh
C. Voelcker
Danica J. Sutherland
Davide Locatelli
Eva Breznik
Filip Klubicka
Hang Yuan
J. Hetvi
Huan Zhang
Jaidev Shriram
Kruno Lehman
Luca Soldaini
Maarten Sap
M. Deisenroth
Maria Leonor Pacheco
Maria Ryskina
Martin Mundt
M. Agarwal
Nyx McLean
Pan Xu
Pranav A
Raj Korpan
Ruchira Ray
Sarah Mathew
Sarthak Arora
S. T. John
Tanvi Anand
Vishakha Agrawal
William Agnew
Yanan Long
Zijie J. Wang
Zeerak Talat
Avijit Ghosh
Nathaniel Dennler
Michael Noseworthy
Sharvani Jha
Emi Baylor
Aditya Joshi
Natalia Y. Bilenko
Andrew McNamara
Raphael Gontijo-Lopes
Alex Markham
Evyn Dǒng
Jackie Kay
Manu Saraswat
Nikhil Vytla
Luke Stark
Papers citing "Queer In AI: A Case Study in Community-Led Participatory AI"

11 papers shown
MDIT-Bench: Evaluating the Dual-Implicit Toxicity in Large Multimodal Models
Bohan Jin
Shuhan Qi
Kehai Chen
Xinyi Guo
Xuan Wang
22 May 2025
The Cake that is Intelligence and Who Gets to Bake it: An AI Analogy and its Implications for Participation
Martin Mundt
Anaelia Ovalle
Felix Friedrich
A Pranav
Subarnaduti Paul
Manuel Brack
Kristian Kersting
William Agnew
05 Feb 2025
GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models
Kunsheng Tang
Wenbo Zhou
Jie Zhang
Aishan Liu
Gelei Deng
Shuai Li
Peigui Qi
Weiming Zhang
Tianwei Zhang
Nenghai Yu
22 Aug 2024
Subverting machines, fluctuating identities: Re-learning human categorization
Christina T. Lu
Jackie Kay
Kevin R. McKee
27 May 2022
LGBTQ Privacy Concerns on Social Media
Christine Geeng
Alexis Hiniker
30 Nov 2021
Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus
Jesse Dodge
Maarten Sap
Ana Marasović
William Agnew
Gabriel Ilharco
Dirk Groeneveld
Margaret Mitchell
Matt Gardner
18 Apr 2021
Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research
A. Feder Cooper
Ellen Abrams
01 Feb 2021
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Su Lin Blodgett
Solon Barocas
Hal Daumé
Hanna M. Wallach
28 May 2020
Measurement and Fairness
Abigail Z. Jacobs
Hanna M. Wallach
11 Dec 2019
Toward Gender-Inclusive Coreference Resolution
Yang Trista Cao
Hal Daumé
30 Oct 2019
Certifying and removing disparate impact
Michael Feldman
Sorelle A. Friedler
John Moeller
C. Scheidegger
Suresh Venkatasubramanian
11 Dec 2014