Reducing Sentiment Bias in Language Models via Counterfactual Evaluation

8 November 2019
Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack W. Rae, Vishal Maini, Dani Yogatama, Pushmeet Kohli
arXiv:1911.03064

Papers citing "Reducing Sentiment Bias in Language Models via Counterfactual Evaluation"

40 citing papers shown.
Counterfactual-Consistency Prompting for Relative Temporal Understanding in Large Language Models
  Jongho Kim, Seung-won Hwang (17 Feb 2025) [LRM, AI4CE]
Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs
  Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo (04 Feb 2025)
Unmasking Conversational Bias in AI Multiagent Systems
  Simone Mungari, Giuseppe Manco, Luca Maria Aiello (24 Jan 2025) [LLMAG]
LLMScan: Causal Scan for LLM Misbehavior Detection
  Mengdi Zhang, Kai Kiat Goh, Peixin Zhang, Jun Sun, Rose Lin Xin, Hongyu Zhang (22 Oct 2024)
Collapsed Language Models Promote Fairness
  Jingxuan Xu, Wuyang Chen, Linyi Li, Yao Zhao, Yunchao Wei (06 Oct 2024)
Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models
  Kang He, Yinghan Long, Kaushik Roy (15 Feb 2024)
Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks
  Rahul Ramesh, Ekdeep Singh Lubana, Mikail Khona, Robert P. Dick, Hidenori Tanaka (21 Nov 2023) [CoGe]
Cultural Alignment in Large Language Models: An Explanatory Analysis Based on Hofstede's Cultural Dimensions
  Reem I. Masoud, Ziquan Liu, Martin Ferianc, Philip C. Treleaven, Miguel R. D. Rodrigues (25 Aug 2023)
A Survey on Fairness in Large Language Models
  Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang (20 Aug 2023) [ALM]
This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
  Seraphina Goldfarb-Tarrant, Eddie L. Ungless, Esma Balkir, Su Lin Blodgett (22 May 2023)
Bias Beyond English: Counterfactual Tests for Bias in Sentiment Analysis in Four Languages
  Seraphina Goldfarb-Tarrant, Adam Lopez, Roi Blanco, Diego Marcheggiani (19 May 2023)
Who's Thinking? A Push for Human-Centered Evaluation of LLMs using the XAI Playbook
  Teresa Datta, John P. Dickerson (10 Mar 2023)
The Capacity for Moral Self-Correction in Large Language Models
  Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas I. Liao, Kamilė Lukošiūtė, ..., Tom B. Brown, C. Olah, Jack Clark, Sam Bowman, Jared Kaplan (15 Feb 2023) [LRM, ReLM]
Explaining text classifiers through progressive neighborhood approximation with realistic samples
  Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder (11 Feb 2023) [AI4TS]
Debiasing Vision-Language Models via Biased Prompts
  Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, Stefanie Jegelka (31 Jan 2023) [VLM]
Manifestations of Xenophobia in AI Systems
  Nenad Tomašev, J. L. Maynard, Iason Gabriel (15 Dec 2022)
LMentry: A Language Model Benchmark of Elementary Language Tasks
  Avia Efrat, Or Honovich, Omer Levy (03 Nov 2022)
COFFEE: Counterfactual Fairness for Personalized Text Generation in Explainable Recommendation
  Nan Wang, Qifan Wang, Yi-Chia Wang, Maziar Sanjabi, Jingzhou Liu, Hamed Firooz, Hongning Wang, Shaoliang Nie (14 Oct 2022)
Generating Coherent Drum Accompaniment With Fills And Improvisations
  Rishabh A. Dahale, Vaibhav Talwadker, Preeti Rao, Prateek Verma (01 Sep 2022)
Unit Testing for Concepts in Neural Networks
  Charles Lovering, Ellie Pavlick (28 Jul 2022)
Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
  Maribeth Rauh, John F. J. Mellor, J. Uesato, Po-Sen Huang, Johannes Welbl, ..., Amelia Glaese, G. Irving, Iason Gabriel, William S. Isaac, Lisa Anne Hendricks (16 Jun 2022)
"I'm sorry to hear that": Finding New Biases in Language Models with a
  Holistic Descriptor Dataset
"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith
Melissa Hall
Melanie Kambadur
Eleonora Presani
Adina Williams
76
129
0
18 May 2022
Detoxifying Language Models with a Toxic Corpus
  Yoon A Park, Frank Rudzicz (30 Apr 2022)
Identifying and Measuring Token-Level Sentiment Bias in Pre-trained Language Models with Prompts
  Apoorv Garg, Deval Srivastava, Zhiyang Xu, Lifu Huang (15 Apr 2022)
Training language models to follow instructions with human feedback
  Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe (04 Mar 2022) [OSLM, ALM]
A Causal Lens for Controllable Text Generation
  Zhiting Hu, Erran L. Li (22 Jan 2022)
A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models
  Hanqing Zhang, Haolin Song, Shaoyu Li, Ming Zhou, Dawei Song (14 Jan 2022)
The King is Naked: on the Notion of Robustness for Natural Language Processing
  Emanuele La Malfa, Marta Z. Kwiatkowska (13 Dec 2021)
Self-Supervised Representation Learning: Introduction, Advances and Challenges
  Linus Ericsson, Henry Gouk, Chen Change Loy, Timothy M. Hospedales (18 Oct 2021) [SSL, OOD, AI4TS]
Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond
  Amir Feder, Katherine A. Keith, Emaad A. Manzoor, Reid Pryzant, Dhanya Sridhar, ..., Roi Reichart, Margaret E. Roberts, Brandon M Stewart, Victor Veitch, Diyi Yang (02 Sep 2021) [CML]
On Measures of Biases and Harms in NLP
  Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, ..., M. Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang (07 Aug 2021)
Intersectional Bias in Causal Language Models
  Liam Magee, Lida Ghahremanlou, K. Soldatić, S. Robertson (16 Jul 2021)
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
  Paula Czarnowska, Yogarshi Vyas, Kashif Shah (28 Jun 2021)
Synthesizing Adversarial Negative Responses for Robust Response Ranking and Evaluation
  Prakhar Gupta, Yulia Tsvetkov, Jeffrey P. Bigham (10 Jun 2021)
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
  Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman (30 Sep 2020)
Language Models are Few-Shot Learners
  Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei (28 May 2020) [BDL]
CausaLM: Causal Model Explanation Through Counterfactual Language Models
  Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart (27 May 2020) [CML, LRM]
Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research
  Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Rada Mihalcea (01 May 2020)
The Woman Worked as a Babysitter: On Biases in Language Generation
  Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng (03 Sep 2019)
Certified Robustness to Adversarial Word Substitutions
  Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang (03 Sep 2019) [AAML]