arXiv: 2308.09437
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space
18 August 2023
Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
AI4CE
Papers citing "From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space" (12 of 12 shown)
Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
Lukasz Sztukiewicz, Ignacy Stepka, Michał Wiliński, Jerzy Stefanowski
104 / 0 / 0 · 28 Feb 2025

Concept-level Debugging of Part-Prototype Networks
A. Bontempelli, Stefano Teso, Katya Tentori, Fausto Giunchiglia, Andrea Passerini
48 / 53 / 0 · 31 May 2022

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
100 / 9 / 0 · 07 Feb 2022

Can contrastive learning avoid shortcut solutions? (SSL)
Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, S. Sra
55 / 143 / 0 · 21 Jun 2021

Finding and Fixing Spurious Patterns with Explanations
Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar
60 / 41 / 0 · 03 Jun 2021

Causally motivated Shortcut Removal Using Auxiliary Labels (OOD, CML)
Maggie Makar, Ben Packer, D. Moldovan, Davis W. Blalock, Yoni Halpern, Alexander D'Amour
56 / 72 / 0 · 13 May 2021

Interpretations are useful: penalizing explanations to align neural networks with prior knowledge (FAtt)
Laura Rieger, Chandan Singh, W. James Murdoch, Bin Yu
68 / 213 / 0 · 30 Sep 2019

BCN20000: Dermoscopic Lesions in the Wild
Marc Combalia, Noel Codella, V. Rotemberg, Brian Helba, Verónica Vilaplana, ..., Cristina Carrera, Alicia Barreiro, Allan Halpern, S. Puig, J. Malvehy
50 / 437 / 0 · 06 Aug 2019

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (3DV, MedIm)
Mingxing Tan, Quoc V. Le
121 / 17,950 / 0 · 28 May 2019

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller
71 / 1,005 / 0 · 26 Feb 2019

Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations (FAtt)
A. Ross, M. C. Hughes, Finale Doshi-Velez
108 / 585 / 0 · 10 Mar 2017

Visualizing and Understanding Convolutional Networks (FAtt, SSL)
Matthew D. Zeiler, Rob Fergus
329 / 15,825 / 0 · 12 Nov 2013