From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space

18 August 2023 · arXiv:2308.09437
Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
AI4CE

Papers citing "From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space"

12 / 12 papers shown
Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
Lukasz Sztukiewicz, Ignacy Stepka, Michał Wiliński, Jerzy Stefanowski
104 · 0 · 0 · 28 Feb 2025

Concept-level Debugging of Part-Prototype Networks
A. Bontempelli, Stefano Teso, Katya Tentori, Fausto Giunchiglia, Andrea Passerini
48 · 53 · 0 · 31 May 2022

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
100 · 9 · 0 · 07 Feb 2022

Can contrastive learning avoid shortcut solutions?
Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, S. Sra
SSL
55 · 143 · 0 · 21 Jun 2021

Finding and Fixing Spurious Patterns with Explanations
Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar
60 · 41 · 0 · 03 Jun 2021

Causally motivated Shortcut Removal Using Auxiliary Labels
Maggie Makar, Ben Packer, D. Moldovan, Davis W. Blalock, Yoni Halpern, Alexander D'Amour
OOD, CML
56 · 72 · 0 · 13 May 2021

Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
Laura Rieger, Chandan Singh, W. James Murdoch, Bin Yu
FAtt
68 · 213 · 0 · 30 Sep 2019

BCN20000: Dermoscopic Lesions in the Wild
Marc Combalia, Noel Codella, V. Rotemberg, Brian Helba, Verónica Vilaplana, ..., Cristina Carrera, Alicia Barreiro, Allan Halpern, S. Puig, J. Malvehy
50 · 437 · 0 · 06 Aug 2019

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Mingxing Tan, Quoc V. Le
3DV, MedIm
121 · 17,950 · 0 · 28 May 2019

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller
71 · 1,005 · 0 · 26 Feb 2019

Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
A. Ross, M. C. Hughes, Finale Doshi-Velez
FAtt
108 · 585 · 0 · 10 Mar 2017

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
FAtt, SSL
329 · 15,825 · 0 · 12 Nov 2013