An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models

14 July 2020
Lifu Tu, Garima Lalwani, Spandana Gella, He He
    LRM

Papers citing "An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models"

21 / 121 papers shown
On a Benefit of Mask Language Modeling: Robustness to Simplicity Bias
Ting-Rui Chiang
11 Oct 2021
Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics
Prajjwal Bhargava, Aleksandr Drozd, Anna Rogers
04 Oct 2021
Adversarial Examples Generation for Reducing Implicit Gender Bias in Pre-trained Models
Wenqian Ye, Fei Xu, Yaojia Huang, Cassie Huang, A. Ji
03 Oct 2021
Symbolic Brittleness in Sequence Models: on Systematic Generalization in Symbolic Mathematics
Sean Welleck, Peter West, Jize Cao, Yejin Choi
28 Sep 2021
BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles
Yunxiang Zhang, Xiaojun Wan
23 Sep 2021
AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters
Tilman Beck, Bela Bohlender, Christina Viehmann, Vincent Hane, Yanik Adamson, Jaber Khuri, Jonas Brossmann, Jonas Pfeiffer, Iryna Gurevych
18 Aug 2021
Is My Model Using The Right Evidence? Systematic Probes for Examining Evidence-Based Tabular Reasoning
Vivek Gupta, Riyaz Ahmad Bhat, Atreya Ghosal, Manisha Srivastava, M. Singh, Vivek Srikumar
LMTD
02 Aug 2021
On the Copying Behaviors of Pre-Training for Neural Machine Translation
Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, Zhaopeng Tu
17 Jul 2021
Robustifying Multi-hop QA through Pseudo-Evidentiality Training
Kyungjae Lee, Seung-won Hwang, Sanghyun Han, Dohyeon Lee
OffRL
07 Jul 2021
An Investigation of the (In)effectiveness of Counterfactually Augmented Data
Nitish Joshi, He He
OODD
01 Jul 2021
Empowering Language Understanding with Counterfactual Reasoning
Fuli Feng, Jizhi Zhang, Xiangnan He, Hanwang Zhang, Tat-Seng Chua
LRM
06 Jun 2021
On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study
Divyansh Kaushik, Douwe Kiela, Zachary Chase Lipton, Wen-tau Yih
AAML
02 Jun 2021
HiddenCut: Simple Data Augmentation for Natural Language Understanding with Better Generalization
Jiaao Chen, Dinghan Shen, Weizhu Chen, Diyi Yang
BDL
31 May 2021
Improved OOD Generalization via Adversarial Training and Pre-training
Mingyang Yi, Lu Hou, Jiacheng Sun, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma
VLM
24 May 2021
Memorisation versus Generalisation in Pre-trained Language Models
Michael Tänzer, Sebastian Ruder, Marek Rei
16 Apr 2021
Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema
Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth
ReLM, LRM
16 Apr 2021
Supervising Model Attention with Human Explanations for Robust Natural Language Inference
Joe Stacey, Yonatan Belinkov, Marek Rei
16 Apr 2021
Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU Models
Mengnan Du, Varun Manjunatha, R. Jain, Ruchi Deshpande, Franck Dernoncourt, Jiuxiang Gu, Tong Sun, Xia Hu
11 Mar 2021
Bridging Textual and Tabular Data for Cross-Domain Text-to-SQL Semantic Parsing
Xi Lin, R. Socher, Caiming Xiong
LMTD
23 Dec 2020
On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning
Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren
24 Oct 2020
Towards Debiasing NLU Models from Unknown Biases
Prasetya Ajie Utama, N. Moosavi, Iryna Gurevych
25 Sep 2020