ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Sparse Visual Counterfactual Explanations in Image Space
16 May 2022
Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein
BDL, CML

Papers citing "Sparse Visual Counterfactual Explanations in Image Space"

50 of 55 papers shown.
From Visual Explanations to Counterfactual Explanations with Latent Diffusion
Tung Luu, Nam Le, Duc Le, Bac Le
DiffM, AAML, FAtt · 194 · 0 · 0 · 12 Apr 2025

Global Counterfactual Directions
Bartlomiej Sobieski, P. Biecek
DiffM · 77 · 6 · 0 · 18 Apr 2024

Discriminative Class Tokens for Text-to-Image Diffusion Models
Idan Schwartz, Vésteinn Snaebjarnarson, Hila Chefer, Ryan Cotterell, Serge Belongie, Lior Wolf, Sagie Benaim
55 · 10 · 0 · 30 Mar 2023

Diffusion Causal Models for Counterfactual Estimation
Pedro Sanchez, Sotirios A. Tsaftaris
DiffM, BDL · 68 · 72 · 0 · 21 Feb 2022

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
Alex Nichol, Prafulla Dhariwal, Aditya A. Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen
323 · 3,594 · 0 · 20 Dec 2021

Blended Diffusion for Text-driven Editing of Natural Images
Omri Avrahami, Dani Lischinski, Ohad Fried
DiffM · 101 · 947 · 0 · 29 Nov 2021

Large-scale Unsupervised Semantic Segmentation
Shangqi Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, Philip Torr
UQCV · 84 · 88 · 0 · 06 Jun 2021

Diffusion Models Beat GANs on Image Synthesis
Prafulla Dhariwal, Alex Nichol
206 · 7,818 · 0 · 11 May 2021

Explaining in Style: Training a GAN to explain a classifier in StyleSpace
Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, G. Elidan, ..., William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, Inbar Mosseri
GAN · 106 · 154 · 0 · 27 Apr 2021

Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties
Lisa Schut, Oscar Key, R. McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Y. Gal
CML · 93 · 48 · 0 · 16 Mar 2021

Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers
Francesco Croce, Matthias Hein
AAML · 77 · 55 · 0 · 01 Mar 2021

Learning Transferable Visual Models From Natural Language Supervision
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
CLIP, VLM · 871 · 29,341 · 0 · 26 Feb 2021

Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images
K. Schutte, O. Moindrot, P. Hérent, Jean-Baptiste Schiratti, S. Jégou
FAtt, MedIm · 109 · 60 · 0 · 19 Jan 2021

Interpretability and Explainability: A Machine Learning Zoo Mini-tour
Ricards Marcinkevics, Julia E. Vogt
XAI · 67 · 120 · 0 · 03 Dec 2020

Understanding Failures of Deep Networks via Robust Feature Extraction
Sahil Singla, Besmira Nushi, S. Shah, Ece Kamar, Eric Horvitz
FAtt · 50 · 84 · 0 · 03 Dec 2020

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
VLM · 316 · 702 · 0 · 19 Oct 2020

Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli
AAML · 47 · 331 · 0 · 07 Oct 2020

Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
Cassidy Laidlaw, Sahil Singla, Soheil Feizi
AAML, OOD · 86 · 187 · 0 · 22 Jun 2020

Deep Structural Causal Models for Tractable Counterfactual Inference
Nick Pawlowski, Daniel Coelho De Castro, Ben Glocker
CML, MedIm · 101 · 236 · 0 · 11 Jun 2020

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin, Alexander Meinke, Matthias Hein
OOD · 154 · 102 · 0 · 20 Mar 2020

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein
AAML · 213 · 1,842 · 0 · 03 Mar 2020

Big Transfer (BiT): General Visual Representation Learning
Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, J. Puigcerver, Jessica Yung, Sylvain Gelly, N. Houlsby
MQ · 278 · 1,205 · 0 · 24 Dec 2019

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
Dan Hendrycks, Norman Mu, E. D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan
OOD, UQCV · 97 · 1,300 · 0 · 05 Dec 2019

Self-training with Noisy Student improves ImageNet classification
Qizhe Xie, Minh-Thang Luong, Eduard H. Hovy, Quoc V. Le
NoLa · 304 · 2,386 · 0 · 11 Nov 2019

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
FAtt, AAML, MLAU · 68 · 817 · 0 · 06 Nov 2019

Natural Adversarial Examples
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, D. Song
OODD · 200 · 1,469 · 0 · 16 Jul 2019

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
AAML, FAtt · 81 · 331 · 0 · 19 Jun 2019

Unlabeled Data Improves Adversarial Robustness
Y. Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, John C. Duchi
121 · 752 · 0 · 31 May 2019

Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations
R. Mothilal, Amit Sharma, Chenhao Tan
CML · 106 · 1,018 · 0 · 19 May 2019

On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb
AAML, FAtt · 58 · 161 · 0 · 10 May 2019

Full-Gradient Representation for Neural Network Visualization
Suraj Srinivas, François Fleuret
MILM, FAtt · 70 · 273 · 0 · 02 May 2019

Counterfactual Visual Explanations
Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, Stefan Lee
CML · 75 · 510 · 0 · 16 Apr 2019

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau
FAtt, 3DH, HAI · 37 · 217 · 0 · 04 Apr 2019

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Wieland Brendel, Matthias Bethge
SSL, FAtt · 84 · 561 · 0 · 20 Mar 2019

A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
Jinghui Chen, Dongruo Zhou, Jinfeng Yi, Quanquan Gu
AAML · 53 · 68 · 0 · 27 Nov 2018

Grounding Visual Explanations
Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
FAtt · 53 · 227 · 0 · 25 Jul 2018

Explaining Image Classifiers by Counterfactual Generation
C. Chang, Elliot Creager, Anna Goldenberg, David Duvenaud
VLM · 70 · 264 · 0 · 20 Jul 2018

Recognition in Terra Incognita
Sara Beery, Grant Van Horn, Pietro Perona
90 · 847 · 0 · 13 Jul 2018

Do CIFAR-10 Classifiers Generalize to CIFAR-10?
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar
OOD, FedML, ELM · 166 · 409 · 0 · 01 Jun 2018

Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
AAML · 99 · 1,778 · 0 · 30 May 2018

Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das
FAtt · 113 · 589 · 0 · 21 Feb 2018

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter, Brent Mittelstadt, Chris Russell
MLAU · 100 · 2,350 · 0 · 01 Nov 2017

Interpretation of Neural Networks is Fragile
Amirata Ghorbani, Abubakar Abid, James Zou
FAtt, AAML · 126 · 865 · 0 · 29 Oct 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI · 239 · 4,259 · 0 · 22 Jun 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 301 · 12,060 · 0 · 19 Jun 2017

On Calibration of Modern Neural Networks
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
UQCV · 294 · 5,825 · 0 · 14 Jun 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 1.1K · 21,864 · 0 · 22 May 2017

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 289 · 19,981 · 0 · 07 Oct 2016

Generating Visual Explanations
Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell
VLM, FAtt · 81 · 618 · 0 · 28 Mar 2016

Identity Mappings in Deep Residual Networks
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
354 · 10,180 · 0 · 16 Mar 2016