ResearchTrend.AI

Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates
Xiaochuang Han, Yulia Tsvetkov
7 October 2021 (arXiv:2110.03212)

Papers citing "Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates"

25 papers shown.

A Simple Remedy for Dataset Bias via Self-Influence: A Mislabeled Sample Perspective
Yeonsung Jung, Jaeyun Song, J. Yang, Jin-Hwa Kim, Sung-Yub Kim, Eunho Yang (01 Nov 2024)

Scalable Influence and Fact Tracing for Large Language Model Pretraining
Tyler A. Chang, Dheeraj Rajagopal, Tolga Bolukbasi, Lucas Dixon, Ian Tenney (22 Oct 2024)

Helpful or Harmful Data? Fine-tuning-free Shapley Attribution for Explaining Language Model Predictions
Jingtan Wang, Xiaoqiang Lin, Rui Qiao, Chuan-Sheng Foo, Bryan Kian Hsiang Low (07 Jun 2024)

Spurious Correlations in Machine Learning: A Survey
Wenqian Ye, Guangtao Zheng, Xu Cao, Yunsheng Ma, Aidong Zhang (20 Feb 2024)

In-Context Learning Demonstration Selection via Influence Analysis
Vinay M.S., Minh-Hao Van, Xintao Wu (19 Feb 2024)

Unlearning Traces the Influential Training Data of Language Models
Masaru Isonuma, Ivan Titov (26 Jan 2024)

Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training
Dongfang Li, Baotian Hu, Qingcai Chen, Shan He (29 Dec 2023)

Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models
Tianjian Li, Haoran Xu, Philipp Koehn, Daniel Khashabi, Kenton W. Murray (02 Oct 2023)

Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs
Kelvin Guu, Albert Webson, Ellie Pavlick, Lucas Dixon, Ian Tenney, Tolga Bolukbasi (14 Mar 2023)

How Many and Which Training Points Would Need to be Removed to Flip this Prediction?
Jinghan Yang, Sarthak Jain, Byron C. Wallace (04 Feb 2023)

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao (07 Dec 2022)

Understanding Text Classification Data and Models Using Aggregated Input Salience
Sebastian Ebert, Alice Shoshana Jakobovits, Katja Filippova (10 Nov 2022)

Influence Functions for Sequence Tagging Models
Sarthak Jain, Varun Manjunatha, Byron C. Wallace, A. Nenkova (25 Oct 2022)

Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov (14 Oct 2022)

Shortcut Learning of Large Language Models in Natural Language Understanding
Mengnan Du, Fengxiang He, Na Zou, Dacheng Tao, Xia Hu (25 Aug 2022)

Probing Classifiers are Unreliable for Concept Removal and Detection
Abhinav Kumar, Chenhao Tan, Amit Sharma (08 Jul 2022)

ORCA: Interpreting Prompted Language Models via Locating Supporting Data Evidence in the Ocean of Pretraining Data
Xiaochuang Han, Yulia Tsvetkov (25 May 2022)

Learning to Ignore Adversarial Attacks
Yiming Zhang, Yan Zhou, Samuel Carton, Chenhao Tan (23 May 2022)

Towards Tracing Factual Knowledge in Language Models Back to the Training Data
Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu (23 May 2022)

Speaker Information Can Guide Models to Better Inductive Biases: A Case Study On Predicting Code-Switching
Alissa Ostapenko, S. Wintner, Melinda Fricke, Yulia Tsvetkov (16 Mar 2022)

First is Better Than Last for Language Data Influence
Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar (24 Feb 2022)

WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi (16 Jan 2022)

Combining Feature and Instance Attribution to Detect Artifacts
Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, Byron C. Wallace (01 Jul 2021)

SelfExplain: A Self-Explaining Architecture for Neural Text Classifiers
Dheeraj Rajagopal, Vidhisha Balachandran, Eduard H. Hovy, Yulia Tsvetkov (23 Mar 2021)

An Investigation of Why Overparameterization Exacerbates Spurious Correlations
Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang (09 May 2020)