ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Explanation by Progressive Exaggeration (arXiv:1911.00483, v3 latest)

1 November 2019
Sumedha Singla, Brian Pollack, Junxiang Chen, Kayhan Batmanghelich
Tags: FAtt, MedIm
Links: arXiv abstract, PDF, HTML

Papers citing "Explanation by Progressive Exaggeration"

50 / 79 papers shown

Unifying Image Counterfactuals and Feature Attributions with Latent-Space Adversarial Attacks
Jeremy Goldwasser, Giles Hooker · AAML · 21 Apr 2025

Explaining Low Perception Model Competency with High-Competency Counterfactuals
Sara Pohland, Claire Tomlin · DiffM, AAML · 07 Apr 2025

PRISM: High-Resolution & Precise Counterfactual Medical Image Generation using Language-guided Stable Diffusion
Amar Kumar, Anita Kriz, Mohammad Havaei, Tal Arbel · MedIm · 28 Feb 2025

DiffEx: Explaining a Classifier with Diffusion Models to Identify Microscopic Cellular Variations
Anis Bourou, Saranga Kingkor Mahanta, Thomas Boyer, Valérie Mezger, Auguste Genovesio · MedIm · 12 Feb 2025

GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle · 23 Nov 2024

Counterfactual Explanations via Riemannian Latent Space Traversal
Paraskevas Pegios, Aasa Feragen, Andreas Abildtrup Hansen, Georgios Arvanitidis · BDL · 04 Nov 2024

Rethinking Visual Counterfactual Explanations Through Region Constraint
Bartlomiej Sobieski, Jakub Grzywaczewski, Bartlomiej Sadlej, Matthew Tivnan, P. Biecek · CML · 16 Oct 2024

Unsupervised Model Diagnosis
Yinong Wang, Eileen Li, Jinqi Luo, Zhaoning Wang, Fernando de la Torre · AAML · 08 Oct 2024

TACE: Tumor-Aware Counterfactual Explanations
Eleonora Beatrice Rossi, Eleonora Lopez, Danilo Comminiello · MedIm · 19 Sep 2024

The Gaussian Discriminant Variational Autoencoder (GdVAE): A Self-Explainable Model with Counterfactual Explanations
Anselm Haselhoff, Kevin Trelenberg, Fabian Küppers, Jonas Schneider · 19 Sep 2024

Methodological Explainability Evaluation of an Interpretable Deep Learning Model for Post-Hepatectomy Liver Failure Prediction Incorporating Counterfactual Explanations and Layerwise Relevance Propagation: A Prospective In Silico Trial
Xian Zhong, Zohaib Salahuddin, Yi Chen, Henry C. Woodruff, H. Long, ..., Lili Chen, Dongming Li, Xiaoyan Xie, Manxia Lin, Philippe Lambin · 07 Aug 2024

Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner · FAtt · 08 Jun 2024

Global Counterfactual Directions
Bartlomiej Sobieski, P. Biecek · DiffM · 18 Apr 2024

Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution
Ian Covert, Chanwoo Kim, Su-In Lee, James Zou, Tatsunori Hashimoto · TDI · 29 Jan 2024

Fast Diffusion-Based Counterfactuals for Shortcut Removal and Generation
Nina Weng, Paraskevas Pegios, Eike Petersen, Aasa Feragen, Siavash Bigdeli · MedIm, CML · 21 Dec 2023

Reconstruction of Patient-Specific Confounders in AI-based Radiologic Image Interpretation using Generative Pretraining
T. Han, Laura Žigutytė, L. Huck, M. Huppertz, R. Siepmann, ..., Firas Khader, Christiane Kuhl, S. Nebelung, Jakob Kather, Daniel Truhn · MedIm · 29 Sep 2023

Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach
Guillaume Jeanneret, Loïc Simon, Frédéric Jurie · DiffM · 14 Sep 2023

Diffusion-based Visual Counterfactual Explanations -- Towards Systematic Quantitative Evaluation
Philipp Vaeth, Alexander M. Fruehwald, Benjamin Paassen, Magda Gregorova · DiffM · 11 Aug 2023

Right for the Wrong Reason: Can Interpretable ML Techniques Detect Spurious Correlations?
Susu Sun, Lisa M. Koch, Christian F. Baumgartner · 23 Jul 2023

Exploring the Lottery Ticket Hypothesis with Explainability Methods: Insights into Sparse Network Performance
Shantanu Ghosh, Kayhan Batmanghelich · 07 Jul 2023

Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat
Shantanu Ghosh, K. Yu, Forough Arabshahi, Kayhan Batmanghelich · MoE · 07 Jul 2023

On the Impact of Knowledge Distillation for Model Interpretability
Hyeongrok Han, Siwon Kim, Hyun-Soo Choi, Sungroh Yoon · 25 May 2023

Adversarial Counterfactual Visual Explanations
Guillaume Jeanneret, Loïc Simon, F. Jurie · DiffM · 17 Mar 2023

Inherently Interpretable Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner · FAtt · 01 Mar 2023

GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
Zijie J. Wang, J. W. Vaughan, R. Caruana, Duen Horng Chau · HAI · 27 Feb 2023

Efficient XAI Techniques: A Taxonomic Survey
Yu-Neng Chuang, Guanchu Wang, Fan Yang, Zirui Liu, Xuanting Cai, Mengnan Du, Helen Zhou · 07 Feb 2023

Integrating Earth Observation Data into Causal Inference: Challenges and Opportunities
Connor Jerzak, Fredrik D. Johansson, Adel Daoud · CML · 30 Jan 2023

Img2Tab: Automatic Class Relevant Concept Discovery from StyleGAN Features for Explainable Image Classification
Y. Song, S. K. Shyn, Kwang-su Kim · VLM · 16 Jan 2023

Deep Causal Learning for Robotic Intelligence
Yongqian Li · CML · 23 Dec 2022

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane · 16 Dec 2022

OCTET: Object-aware Counterfactual Explanations
Mehdi Zemni, Mickaël Chen, Éloi Zablocki, H. Ben-younes, Patrick Pérez, Matthieu Cord · AAML · 22 Nov 2022

Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks
Stephen Casper, K. Hariharan, Dylan Hadfield-Menell · AAML · 18 Nov 2022

A Rigorous Study Of The Deep Taylor Decomposition
Leon Sixt, Tim Landgraf · FAtt, AAML · 14 Nov 2022

Augmentation by Counterfactual Explanation -- Fixing an Overconfident Classifier
Sumedha Singla, Nihal Murali, Forough Arabshahi, Sofia Triantafyllou, Kayhan Batmanghelich · CML · 21 Oct 2022

Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis
Xuanyuan Han, Pietro Barbiero, Dobrik Georgiev, Lucie Charlotte Magister, Pietro Lio · MILM · 22 Aug 2022

Towards a More Rigorous Science of Blindspot Discovery in Image Classification Models
Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Ameet Talwalkar · 08 Jul 2022

The Manifold Hypothesis for Gradient-Based Explanations
Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, U. V. Luxburg · FAtt, AAML · 15 Jun 2022

Estimating Causal Effects Under Image Confounding Bias with an Application to Poverty in Africa
Connor Jerzak, Fredrik D. Johansson, Adel Daoud · CML · 13 Jun 2022

Diffeomorphic Counterfactuals with Generative Models
Ann-Kathrin Dombrowski, Jan E. Gerken, Klaus-Robert Müller, Pan Kessel · DiffM, BDL · 10 Jun 2022

Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf · FAtt · 25 Apr 2022

Diffusion Models for Counterfactual Explanations
Guillaume Jeanneret, Loïc Simon, F. Jurie · DiffM · 29 Mar 2022

Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals
Simon Vandenhende, D. Mahajan, Filip Radenovic, Deepti Ghadiyaram · 24 Mar 2022

Which Style Makes Me Attractive? Interpretable Control Discovery and Counterfactual Explanation on StyleGAN
Yangqiu Song, Qiulin Wang, Jiquan Pei, Yu Yang, Xiangyang Ji · CVBM · 24 Jan 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert · ELM, XAI · 20 Jan 2022

When less is more: Simplifying inputs aids neural network understanding
R. Schirrmeister, Rosanne Liu, Sara Hooker, T. Ball · 14 Jan 2022

Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates
Dan Ley, Umang Bhatt, Adrian Weller · UQCV · 05 Dec 2021

STEEX: Steering Counterfactual Explanations with Semantics
P. Jacob, Éloi Zablocki, H. Ben-younes, Mickaël Chen, P. Pérez, Matthieu Cord · 17 Nov 2021

Interpretable ECG classification via a query-based latent space traversal (qLST)
Melle B. Vessies, Sharvaree P. Vadgama, R. V. D. Leur, P. Doevendans, R. Hassink, Erik J. Bekkers, R. V. Es · 14 Nov 2021

On Quantitative Evaluations of Counterfactuals
Frederik Hvilshøj, Alexandros Iosifidis, Ira Assent · 30 Oct 2021

Counterfactual Explanation of Brain Activity Classifiers using Image-to-Image Transfer by Generative Adversarial Network
Teppei Matsui, Masato Taki, Trung Quang Pham, J. Chikazoe, K. Jimura · DiffM, AAML · 28 Oct 2021