ResearchTrend.AI
Explanation by Progressive Exaggeration

1 November 2019
Sumedha Singla
Brian Pollack
Junxiang Chen
Kayhan Batmanghelich
    FAtt
    MedIm

Papers citing "Explanation by Progressive Exaggeration"

37 / 37 papers shown
Unifying Image Counterfactuals and Feature Attributions with Latent-Space Adversarial Attacks
Jeremy Goldwasser
Giles Hooker
AAML
36
0
0
21 Apr 2025
GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
Éloi Zablocki
Valentin Gerard
Amaia Cardiel
Eric Gaussier
Matthieu Cord
Eduardo Valle
86
0
0
23 Nov 2024
Global Counterfactual Directions
Bartlomiej Sobieski
P. Biecek
DiffM
58
5
0
18 Apr 2024
Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach
Guillaume Jeanneret
Loïc Simon
Frédéric Jurie
DiffM
30
12
0
14 Sep 2023
Exploring the Lottery Ticket Hypothesis with Explainability Methods: Insights into Sparse Network Performance
Shantanu Ghosh
Kayhan Batmanghelich
32
0
0
07 Jul 2023
Adversarial Counterfactual Visual Explanations
Guillaume Jeanneret
Loïc Simon
F. Jurie
DiffM
41
27
0
17 Mar 2023
Inherently Interpretable Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun
S. Woerner
Andreas Maier
Lisa M. Koch
Christian F. Baumgartner
FAtt
37
16
0
01 Mar 2023
GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
Zijie J. Wang
J. W. Vaughan
R. Caruana
Duen Horng Chau
HAI
36
21
0
27 Feb 2023
Integrating Earth Observation Data into Causal Inference: Challenges and Opportunities
Connor Jerzak
Fredrik D. Johansson
Adel Daoud
CML
43
11
0
30 Jan 2023
Img2Tab: Automatic Class Relevant Concept Discovery from StyleGAN Features for Explainable Image Classification
Y. Song
S. K. Shyn
Kwang-su Kim
VLM
26
5
0
16 Jan 2023
Deep Causal Learning for Robotic Intelligence
Yong Li
CML
44
5
0
23 Dec 2022
Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney
A. Pakrashi
Derek Greene
Mark T. Keane
37
16
0
16 Dec 2022
OCTET: Object-aware Counterfactual Explanations
Mehdi Zemni
Mickaël Chen
Éloi Zablocki
H. Ben-younes
Patrick Pérez
Matthieu Cord
AAML
29
29
0
22 Nov 2022
Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks
Stephen Casper
K. Hariharan
Dylan Hadfield-Menell
AAML
26
11
0
18 Nov 2022
Augmentation by Counterfactual Explanation -- Fixing an Overconfident Classifier
Sumedha Singla
Nihal Murali
Forough Arabshahi
Sofia Triantafyllou
Kayhan Batmanghelich
CML
59
5
0
21 Oct 2022
Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis
Xuanyuan Han
Pietro Barbiero
Dobrik Georgiev
Lucie Charlotte Magister
Pietro Lio
MILM
42
41
0
22 Aug 2022
Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
Leon Sixt
M. Schuessler
Oana-Iuliana Popescu
Philipp Weiß
Tim Landgraf
FAtt
34
14
0
25 Apr 2022
Diffusion Models for Counterfactual Explanations
Guillaume Jeanneret
Loïc Simon
F. Jurie
DiffM
35
55
0
29 Mar 2022
Which Style Makes Me Attractive? Interpretable Control Discovery and Counterfactual Explanation on StyleGAN
Yangqiu Song
Qiulin Wang
Jiquan Pei
Yu Yang
Xiangyang Ji
CVBM
32
3
0
24 Jan 2022
When less is more: Simplifying inputs aids neural network understanding
R. Schirrmeister
Rosanne Liu
Sara Hooker
T. Ball
27
5
0
14 Jan 2022
STEEX: Steering Counterfactual Explanations with Semantics
P. Jacob
Éloi Zablocki
H. Ben-younes
Mickaël Chen
P. Pérez
Matthieu Cord
19
43
0
17 Nov 2021
On Quantitative Evaluations of Counterfactuals
Frederik Hvilshoj
Alexandros Iosifidis
Ira Assent
19
10
0
30 Oct 2021
Robust Feature-Level Adversaries are Interpretability Tools
Stephen Casper
Max Nadeau
Dylan Hadfield-Menell
Gabriel Kreiman
AAML
53
27
0
07 Oct 2021
Designing Counterfactual Generators using Deep Model Inversion
Jayaraman J. Thiagarajan
V. Narayanaswamy
Deepta Rajan
J. Liang
Akshay S. Chaudhari
A. Spanias
DiffM
20
22
0
29 Sep 2021
Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments
M. Schuessler
Philipp Weiß
Leon Sixt
40
3
0
06 May 2021
Explaining in Style: Training a GAN to explain a classifier in StyleSpace
Oran Lang
Yossi Gandelsman
Michal Yarom
Yoav Wald
G. Elidan
...
William T. Freeman
Phillip Isola
Amir Globerson
Michal Irani
Inbar Mosseri
GAN
45
152
0
27 Apr 2021
Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations
Pau Rodríguez López
Massimo Caccia
Alexandre Lacoste
L. Zamparo
I. Laradji
Laurent Charlin
David Vazquez
AAML
37
55
0
18 Mar 2021
If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
Mark T. Keane
Eoin M. Kenny
Eoin Delaney
Barry Smyth
CML
27
146
0
26 Feb 2021
Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays
Joseph Paul Cohen
Rupert Brooks
Sovann En
Evan Zucker
Anuj Pareek
M. Lungren
Akshay S. Chaudhari
FAtt
MedIm
37
4
0
18 Feb 2021
Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki
H. Ben-younes
P. Pérez
Matthieu Cord
XAI
48
170
0
13 Jan 2021
Explaining the Black-box Smoothly -- A Counterfactual Approach
Junyu Chen
Yong Du
Yufan He
W. Paul Segars
Ye Li
MedIm
FAtt
67
100
0
11 Jan 2021
Concept-based model explanations for Electronic Health Records
Diana Mincu
Eric Loreaux
Shaobo Hou
Sebastien Baur
Ivan V. Protsyuk
Martin G. Seneviratne
A. Mottram
Nenad Tomašev
Alan Karthikesalingam
Jessica Schrouff
11
27
0
03 Dec 2020
Counterfactual Explanation and Causal Inference in Service of Robustness in Robot Control
Simón C. Smith
S. Ramamoorthy
26
13
0
18 Sep 2020
On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning
Eoin M. Kenny
Mark T. Keane
28
99
0
10 Sep 2020
Scientific Discovery by Generating Counterfactuals using Image Translation
Arunachalam Narayanaswamy
Subhashini Venugopalan
D. Webster
L. Peng
G. Corrado
...
Abigail E. Huang
Siva Balasubramanian
Michael P. Brenner
Phil Q. Nelson
A. Varadarajan
DiffM
MedIm
30
20
0
10 Jul 2020
Deep Structural Causal Models for Tractable Counterfactual Inference
Nick Pawlowski
Daniel Coelho De Castro
Ben Glocker
CML
MedIm
33
229
0
11 Jun 2020
A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras
S. Laine
Timo Aila
309
10,391
0
12 Dec 2018