Explaining Image Classifiers with Multiscale Directional Image Representation

22 November 2022
Stefan Kolek, Robert Windesheim, Héctor Andrade-Loarca, Gitta Kutyniok, Ron Levie
arXiv:2211.12857

Papers citing "Explaining Image Classifiers with Multiscale Directional Image Representation"

30 / 30 papers shown

3VL: Using Trees to Improve Vision-Language Models' Interpretability
  Nir Yellinek, Leonid Karlinsky, Raja Giryes · CoGe, VLM · 4 citations · 28 Dec 2023

Cartoon Explanations of Image Classifiers
  Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok · FAtt · 16 citations · 07 Oct 2021

In-Distribution Interpretability for Challenging Modalities
  Cosmas Heiß, Ron Levie, Cinjon Resnick, Gitta Kutyniok, Joan Bruna · 7 citations · 01 Jul 2020

PyTorch: An Imperative Style, High-Performance Deep Learning Library
  Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala · ODL · 42,038 citations · 03 Dec 2019

Shearlets as Feature Extractor for Semantic Edge Detection: The Model-Based and Data-Driven Realm
  Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Oktem · 16 citations · 27 Nov 2019

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
  Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju · FAtt, AAML, MLAU · 809 citations · 06 Nov 2019

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
  Ruth C. Fong, Mandela Patrick, Andrea Vedaldi · AAML · 413 citations · 18 Oct 2019

A Rate-Distortion Framework for Explaining Neural Network Decisions
  Jan Macdonald, S. Wäldchen, Sascha Hauch, Gitta Kutyniok · 40 citations · 27 May 2019

Searching for MobileNetV3
  Andrew G. Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, ..., Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam · 6,685 citations · 06 May 2019

Edge, Ridge, and Blob Detection with Symmetric Molecules
  Rafael Reisenhofer, E. King · 17 citations · 28 Jan 2019

Extraction of digital wavefront sets using applied harmonic analysis and deep neural networks
  Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Oktem, P. Petersen · 15 citations · 05 Jan 2019

Explaining Image Classifiers by Counterfactual Generation
  C. Chang, Elliot Creager, Anna Goldenberg, David Duvenaud · VLM · 265 citations · 20 Jul 2018

A Benchmark for Interpretability Methods in Deep Neural Networks
  Sara Hooker, D. Erhan, Pieter-Jan Kindermans, Been Kim · FAtt, UQCV · 673 citations · 28 Jun 2018

This Looks Like That: Deep Learning for Interpretable Image Recognition
  Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin · 1,172 citations · 27 Jun 2018

RISE: Randomized Input Sampling for Explanation of Black-box Models
  Vitali Petsiuk, Abir Das, Kate Saenko · FAtt · 1,159 citations · 19 Jun 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
  Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres · FAtt · 1,817 citations · 30 Nov 2017

The (Un)reliability of saliency methods
  Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim · FAtt, XAI · 683 citations · 02 Nov 2017

Explaining Recurrent Neural Network Predictions in Sentiment Analysis
  L. Arras, G. Montavon, K. Müller, Wojciech Samek · FAtt · 353 citations · 22 Jun 2017

A Unified Approach to Interpreting Model Predictions
  Scott M. Lundberg, Su-In Lee · FAtt · 21,459 citations · 22 May 2017

Real Time Image Saliency for Black Box Classifiers
  P. Dabkowski, Y. Gal · 586 citations · 22 May 2017

Interpretable Explanations of Black Boxes by Meaningful Perturbation
  Ruth C. Fong, Andrea Vedaldi · FAtt, AAML · 1,514 citations · 11 Apr 2017

Axiomatic Attribution for Deep Networks
  Mukund Sundararajan, Ankur Taly, Qiqi Yan · OOD, FAtt · 5,920 citations · 04 Mar 2017

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
  Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra · FAtt · 19,796 citations · 07 Oct 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 16,765 citations · 16 Feb 2016

Deep Residual Learning for Image Recognition
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 192,638 citations · 10 Dec 2015

Evaluating the visualization of what a Deep Neural Network has learned
  Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller · XAI · 1,189 citations · 21 Sep 2015

Adam: A Method for Stochastic Optimization
  Diederik P. Kingma, Jimmy Ba · ODL · 149,474 citations · 22 Dec 2014

Striving for Simplicity: The All Convolutional Net
  Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller · FAtt · 4,653 citations · 21 Dec 2014

Very Deep Convolutional Networks for Large-Scale Image Recognition
  Karen Simonyan, Andrew Zisserman · FAtt, MDE · 99,991 citations · 04 Sep 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
  Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 7,252 citations · 20 Dec 2013