Detect and Perturb: Neutral Rewriting of Biased and Sensitive Text via Gradient-based Decoding

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
24 September 2021
Zexue He
Bodhisattwa Prasad Majumder
Julian McAuley
arXiv:2109.11708

Papers citing "Detect and Perturb: Neutral Rewriting of Biased and Sensitive Text via Gradient-based Decoding"

15 citing papers shown
Quantifying Cognitive Bias Induction in LLM-Generated Content
Abeer Alessa
Param Somane
Akshaya Lakshminarasimhan
Julian Skirzynski
Julian McAuley
J. Echterhoff
03 Jul 2025
GeNRe: A French Gender-Neutral Rewriting System Using Collective Nouns
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Enzo Doyen
Amalia Todirascu
29 May 2025
DeCAP: Context-Adaptive Prompt Generation for Debiasing Zero-shot Question Answering in Large Language Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Suyoung Bae
YunSeok Choi
Jee-Hyong Lee
25 Mar 2025
Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes
International Conference on Computational Linguistics (COLING), 2024
Damin Zhang
Yi Zhang
Geetanjali Bihani
Julia Taylor Rayz
06 May 2024
Analyzing Sentiment Polarity Reduction in News Presentation through Contextual Perturbation and Large Language Models
Alapan Kuila
Somnath Jena
Sudeshna Sarkar
P. Chakrabarti
03 Feb 2024
Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
Vithya Yogarajan
Gillian Dobbie
Te Taka Keegan
R. Neuwirth
03 Dec 2023
Bias and Fairness in Large Language Models: A Survey
Computational Linguistics (CL), 2023
Isabel O. Gallegos
Ryan Rossi
Joe Barrow
Md Mehrab Tanjim
Sungchul Kim
Franck Dernoncourt
Tong Yu
Ruiyi Zhang
Nesreen Ahmed
02 Sep 2023
Targeted Data Generation: Finding and Fixing Model Weaknesses
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Zexue He
Marco Tulio Ribeiro
Fereshte Khani
28 May 2023
"Nothing Abnormal": Disambiguating Medical Reports via Contrastive Knowledge Infusion
AAAI Conference on Artificial Intelligence (AAAI), 2023
Zexue He
An Yan
Amilcare Gentili
Julian McAuley
Chun-Nan Hsu
15 May 2023
Synthetic Pre-Training Tasks for Neural Machine Translation
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Zexue He
Graeme W. Blackwood
Yikang Shen
Julian McAuley
Rogerio Feris
19 Dec 2022
Style Transfer as Data Augmentation: A Case Study on Named Entity Recognition
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Shuguang Chen
Leonardo Neves
Thamar Solorio
14 Oct 2022
Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Sachin Kumar
Vidhisha Balachandran
Lucille Njoo
Antonios Anastasopoulos
Yulia Tsvetkov
14 Oct 2022
Controlling Bias Exposure for Fair Interpretable Predictions
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zexue He
Yu Wang
Julian McAuley
Bodhisattwa Prasad Majumder
14 Oct 2022
InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Bodhisattwa Prasad Majumder
Zexue He
Julian McAuley
14 Oct 2022
Text Style Transfer for Bias Mitigation using Masked Language Modeling
North American Chapter of the Association for Computational Linguistics (NAACL), 2022
E. Tokpo
T. Calders
21 Jan 2022