Less is More: The Influence of Pruning on the Explainability of CNNs
  David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker · FAtt · 17 February 2023

Papers citing "Less is More: The Influence of Pruning on the Explainability of CNNs"

50 / 51 papers shown

Pruning in the Face of Adversaries
  F. Merkle, Maximilian Samsinger, Pascal Schöttle · AAML, CVBM · 44 · 3 · 0 · 19 Aug 2021
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
  Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, B. Guo · ViT · 423 · 21,347 · 0 · 25 Mar 2021
Socially Responsible AI Algorithms: Issues, Purposes, and Challenges
  Lu Cheng, Kush R. Varshney, Huan Liu · FaML · 105 · 150 · 0 · 01 Jan 2021
A Survey on the Explainability of Supervised Machine Learning
  Nadia Burkart, Marco F. Huber · FaML, XAI · 48 · 773 · 0 · 16 Nov 2020
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
  Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby · ViT · 557 · 40,961 · 0 · 22 Oct 2020
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
  Christoph Molnar, Giuseppe Casalicchio, B. Bischl · AI4TS, AI4CE · 71 · 402 · 0 · 19 Oct 2020
Utilizing Explainable AI for Quantization and Pruning of Deep Neural Networks
  Muhammad Sabih, Frank Hannig, J. Teich · MQ · 82 · 24 · 0 · 20 Aug 2020
The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies
  A. Markus, J. Kors, P. Rijnbeek · 80 · 465 · 0 · 31 Jul 2020
On quantitative aspects of model interpretability
  An-phi Nguyen, María Rodríguez Martínez · 43 · 114 · 0 · 15 Jul 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
  Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · AAML, XAI · 95 · 377 · 0 · 30 Apr 2020
Streamlining Tensor and Network Pruning in PyTorch
  Michela Paganini, Jessica Zosa Forde · 34 · 12 · 0 · 28 Apr 2020
What is the State of Neural Network Pruning?
  Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag · 258 · 1,047 · 0 · 06 Mar 2020
Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning
  Seul-Ki Yeom, P. Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, K. Müller, Wojciech Samek · CVBM · 53 · 207 · 0 · 18 Dec 2019
Towards Explainable Deep Neural Networks (xDNN)
  Plamen Angelov, Eduardo Soares · AAML · 61 · 261 · 0 · 05 Dec 2019
Improving Feature Attribution through Input-specific Network Pruning
  Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, S. T. Kim, Nassir Navab · FAtt · 36 · 11 · 0 · 25 Nov 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
  Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera · XAI · 116 · 6,251 · 0 · 22 Oct 2019
A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
  Erico Tjoa, Cuntai Guan · XAI · 89 · 1,446 · 0 · 17 Jul 2019
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
  Mingxing Tan, Quoc V. Le · 3DV, MedIm · 131 · 18,106 · 0 · 28 May 2019
The State of Sparsity in Deep Neural Networks
  Trevor Gale, Erich Elsen, Sara Hooker · 147 · 758 · 0 · 25 Feb 2019
Quantifying Interpretability and Trust in Machine Learning Systems
  Philipp Schmidt, F. Biessmann · 45 · 113 · 0 · 20 Jan 2019
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead
  Cynthia Rudin · ELM, FaML · 50 · 219 · 0 · 26 Nov 2018
Rethinking the Value of Network Pruning
  Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell · 36 · 1,471 · 0 · 11 Oct 2018
Sanity Checks for Saliency Maps
  Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · FAtt, AAML, XAI · 123 · 1,965 · 0 · 08 Oct 2018
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
  Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang · AAML, VLM · 60 · 963 · 0 · 21 Aug 2018
Techniques for Interpretable Machine Learning
  Mengnan Du, Ninghao Liu, Xia Hu · FaML · 77 · 1,090 · 0 · 31 Jul 2018
A Benchmark for Interpretability Methods in Deep Neural Networks
  Sara Hooker, D. Erhan, Pieter-Jan Kindermans, Been Kim · FAtt, UQCV · 98 · 681 · 0 · 28 Jun 2018
Explaining Explanations: An Overview of Interpretability of Machine Learning
  Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal · XAI · 83 · 1,858 · 0 · 31 May 2018
A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers
  Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, M. Fardad, Yanzhi Wang · 57 · 438 · 0 · 10 Apr 2018
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
  Gabrielle Ras, Marcel van Gerven, W. Haselager · XAI · 91 · 219 · 0 · 20 Mar 2018
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
  Jonathan Frankle, Michael Carbin · 223 · 3,461 · 0 · 09 Mar 2018
A Survey Of Methods For Explaining Black Box Models
  Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti · XAI · 124 · 3,954 · 0 · 06 Feb 2018
Interpreting Convolutional Neural Networks Through Compression
  R. Abbasi-Asl, Bin Yu · FAtt · 30 · 21 · 0 · 07 Nov 2017
Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
  Aditya Chattopadhyay, Anirban Sarkar, Prantik Howlader, V. Balasubramanian · FAtt · 103 · 2,289 · 0 · 30 Oct 2017
Data-Driven Sparse Structure Selection for Deep Neural Networks
  Zehao Huang, Naiyan Wang · 83 · 561 · 0 · 05 Jul 2017
Explanation in Artificial Intelligence: Insights from the Social Sciences
  Tim Miller · XAI · 239 · 4,259 · 0 · 22 Jun 2017
A Unified Approach to Interpreting Model Predictions
  Scott M. Lundberg, Su-In Lee · FAtt · 1.1K · 21,815 · 0 · 22 May 2017
Learning Important Features Through Propagating Activation Differences
  Avanti Shrikumar, Peyton Greenside, A. Kundaje · FAtt · 190 · 3,869 · 0 · 10 Apr 2017
Axiomatic Attribution for Deep Networks
  Mukund Sundararajan, Ankur Taly, Qiqi Yan · OOD, FAtt · 175 · 5,986 · 0 · 04 Mar 2017
Towards A Rigorous Science of Interpretable Machine Learning
  Finale Doshi-Velez, Been Kim · XAI, FaML · 382 · 3,785 · 0 · 28 Feb 2017
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
  Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra · FAtt · 270 · 19,981 · 0 · 07 Oct 2016
European Union regulations on algorithmic decision-making and a "right to explanation"
  B. Goodman, Seth Flaxman · FaML, AILaw · 63 · 1,899 · 0 · 28 Jun 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 1.2K · 16,931 · 0 · 16 Feb 2016
Learning Deep Features for Discriminative Localization
  Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba · SSL, SSeg, FAtt · 243 · 9,305 · 0 · 14 Dec 2015
Deep Residual Learning for Image Recognition
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 2.1K · 193,814 · 0 · 10 Dec 2015
Visualizing Deep Convolutional Neural Networks Using Natural Pre-Images
  Aravindh Mahendran, Andrea Vedaldi · FAtt · 68 · 534 · 0 · 07 Dec 2015
Evaluating the visualization of what a Deep Neural Network has learned
  Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller · XAI · 132 · 1,192 · 0 · 21 Sep 2015
Learning both Weights and Connections for Efficient Neural Networks
  Song Han, Jeff Pool, J. Tran, W. Dally · CVBM · 306 · 6,669 · 0 · 08 Jun 2015
Going Deeper with Convolutions
  Christian Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, Scott E. Reed, Dragomir Anguelov, D. Erhan, Vincent Vanhoucke, Andrew Rabinovich · 424 · 43,635 · 0 · 17 Sep 2014
Very Deep Convolutional Networks for Large-Scale Image Recognition
  Karen Simonyan, Andrew Zisserman · FAtt, MDE · 1.6K · 100,330 · 0 · 04 Sep 2014
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
  Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 303 · 7,289 · 0 · 20 Dec 2013