ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 2010.12016
Towards falsifiable interpretability research

22 October 2020
Matthew L. Leavitt, Ari S. Morcos
Tags: AAML, AI4CE

Papers citing "Towards falsifiable interpretability research"

24 papers shown
A Mathematical Philosophy of Explanations in Mechanistic Interpretability -- The Strange Science Part I.i
Kola Ayonrinde, Louis Jaburi
Tags: MILM
88 · 1 · 0 · 01 May 2025

An Actionability Assessment Tool for Explainable AI
Ronal Singh, Tim Miller, L. Sonenberg, Eduardo Velloso, F. Vetere, Piers Howe, Paul Dourish
27 · 2 · 0 · 19 Jun 2024

Acoustic characterization of speech rhythm: going beyond metrics with recurrent neural networks
François Deloche, Laurent Bonnasse-Gahot, Judit Gervain
26 · 0 · 0 · 22 Jan 2024

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain
Tags: FaML
29 · 0 · 0 · 20 Nov 2023

Identifying Interpretable Visual Features in Artificial and Biological Neural Systems
David A. Klindt, Sophia Sanborn, Francisco Acosta, Frédéric Poitevin, Nina Miolane
Tags: MILM, FAtt
44 · 7 · 0 · 17 Oct 2023

Causal Analysis for Robust Interpretability of Neural Networks
Ola Ahmad, Nicolas Béreux, Loïc Baret, V. Hashemi, Freddy Lecue
Tags: CML
29 · 3 · 0 · 15 May 2023

The Representational Status of Deep Learning Models
Eamon Duede
21 · 0 · 0 · 21 Mar 2023

Tracr: Compiled Transformers as a Laboratory for Interpretability
David Lindner, János Kramár, Sebastian Farquhar, Matthew Rahtz, Tom McGrath, Vladimir Mikulik
29 · 72 · 0 · 12 Jan 2023

Higher-order mutual information reveals synergistic sub-networks for multi-neuron importance
Kenzo Clauw, S. Stramaglia, Daniele Marinazzo
Tags: SSL, FAtt
30 · 6 · 0 · 01 Nov 2022

SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer
34 · 40 · 0 · 22 Aug 2022

Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven
Tags: FAtt
47 · 18 · 0 · 31 May 2022

Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
Greta Warren, Mark T. Keane, R. Byrne
Tags: CML
27 · 22 · 0 · 21 Apr 2022

An explainability framework for cortical surface-based deep learning
Fernanda L. Ribeiro, S. Bollmann, R. Cunnington, A. M. Puckett
Tags: FAtt, AAML, MedIm
24 · 2 · 0 · 15 Mar 2022

Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
Antonios Mamalakis, E. Barnes, I. Ebert-Uphoff
29 · 73 · 0 · 07 Feb 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
66 · 114 · 0 · 06 Dec 2021

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel
Tags: FAtt
47 · 31 · 0 · 23 Jun 2021

Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong, Shibani Santurkar, A. Madry
Tags: FAtt
22 · 88 · 0 · 11 May 2021

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis, I. Ebert-Uphoff, E. Barnes
Tags: OOD
28 · 75 · 0 · 18 Mar 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
Tags: AAML, FAtt
23 · 57 · 0 · 25 Feb 2021

Estimating Example Difficulty Using Variance of Gradients
Chirag Agarwal, Daniel D'souza, Sara Hooker
210 · 107 · 0 · 26 Aug 2020

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Matthew L. Leavitt, Ari S. Morcos
58 · 33 · 0 · 03 Mar 2020

Revisiting the Importance of Individual Units in CNNs via Ablation
Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba
Tags: FAtt
59 · 116 · 0 · 07 Jun 2018

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
Tags: FaML
234 · 2,238 · 0 · 24 Jun 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
257 · 3,690 · 0 · 28 Feb 2017