Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

Cynthia Rudin · 26 November 2018 · arXiv:1811.10154 · Communities: ELM, FaML

Papers citing "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead"

31 / 31 papers shown
  • On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
    Nitay Calderon, Roi Reichart · 27 Jul 2024 · 10 citations
  • Learning accurate and interpretable tree-based models
    Maria-Florina Balcan, Dravyansh Sharma · 24 May 2024 · 7 citations
  • Black-Box Access is Insufficient for Rigorous AI Audits
    Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell · 25 Jan 2024 · 78 citations · AAML
  • BELLA: Black box model Explanations by Local Linear Approximations
    N. Radulovic, Albert Bifet, Fabian M. Suchanek · 18 May 2023 · 1 citation · FAtt
  • Finding the Needle in a Haystack: Unsupervised Rationale Extraction from Long Text Classifiers
    Kamil Bujel, Andrew Caines, H. Yannakoudakis, Marek Rei · 14 Mar 2023 · 1 citation · AI4TS
  • Less is More: The Influence of Pruning on the Explainability of CNNs
    David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker · 17 Feb 2023 · 1 citation · FAtt
  • CI-GNN: A Granger Causality-Inspired Graph Neural Network for Interpretable Brain Network-Based Psychiatric Diagnosis
    Kaizhong Zheng, Shujian Yu, Badong Chen · 04 Jan 2023 · 31 citations · CML
  • "Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI
    Leilani H. Gilpin, Andrew R. Paley, M. A. Alam, Sarah Spurlock, Kristian J. Hammond · 27 Jun 2022 · 6 citations · XAI
  • Interpretation of Black Box NLP Models: A Survey
    Shivani Choudhary, N. Chatterjee, S. K. Saha · 31 Mar 2022 · 10 citations · FAtt
  • Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners
    K. Ramamurthy, Amit Dhurandhar, Dennis L. Wei, Zaid Bin Tariq · 02 Feb 2022 · 3 citations · FAtt
  • Diagnosing AI Explanation Methods with Folk Concepts of Behavior
    Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova · 27 Jan 2022 · 15 citations
  • Foundations of Symbolic Languages for Model Interpretability
    Marcelo Arenas, Daniel Baez, Pablo Barceló, Jorge A. Pérez, Bernardo Subercaseaux · 05 Oct 2021 · 24 citations · ReLM, LRM
  • Combining Transformers with Natural Language Explanations
    Federico Ruggeri, Marco Lippi, Paolo Torroni · 02 Sep 2021 · 1 citation
  • Local Interpretations for Explainable Natural Language Processing: A Survey
    Siwen Luo, Hamish Ivison, S. Han, Josiah Poon · 20 Mar 2021 · 48 citations · MILM
  • Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
    Gintare Karolina Dziugaite, Shai Ben-David, Daniel M. Roy · 26 Oct 2020 · 38 citations · FaML
  • Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability
    Ishai Rosenberg, Shai Meir, J. Berrebi, I. Gordon, Guillaume Sicard, Eli David · 28 Sep 2020 · 25 citations · AAML, SILM
  • Conceptual Metaphors Impact Perceptions of Human-AI Collaboration
    Pranav Khadpe, Ranjay Krishna, Fei-Fei Li, Jeffrey T. Hancock, Michael S. Bernstein · 05 Aug 2020 · 105 citations
  • Learning Global Transparent Models Consistent with Local Contrastive Explanations
    Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar · 19 Feb 2020 · 0 citations · FAtt
  • Algorithmic Recourse: from Counterfactual Explanations to Interventions
    Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera · 14 Feb 2020 · 337 citations · CML
  • Exploring Benefits of Transfer Learning in Neural Machine Translation
    Tom Kocmi · 06 Jan 2020 · 17 citations
  • Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability
    José Mena, O. Pujol, Jordi Vitrià · 29 Dec 2019 · 8 citations
  • Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
    Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera · 22 Oct 2019 · 6,110 citations · XAI
  • Visualizing Image Content to Explain Novel Image Discovery
    Jake H. Lee, K. Wagstaff · 14 Aug 2019 · 3 citations
  • Model-Agnostic Counterfactual Explanations for Consequential Decisions
    Amir-Hossein Karimi, Gilles Barthe, Borja Balle, Isabel Valera · 27 May 2019 · 317 citations
  • Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model
    Tong Wang, Qihang Lin · 10 May 2019 · 19 citations
  • The Scientific Method in the Science of Machine Learning
    Jessica Zosa Forde, Michela Paganini · 24 Apr 2019 · 35 citations
  • VINE: Visualizing Statistical Interactions in Black Box Models
    M. Britton · 01 Apr 2019 · 21 citations · FAtt
  • Interpreting Neural Networks Using Flip Points
    Roozbeh Yousefzadeh, D. O'Leary · 21 Mar 2019 · 17 citations · AAML, FAtt
  • SAFE ML: Surrogate Assisted Feature Extraction for Model Learning
    Alicja Gosiewska, A. Gacek, Piotr Lubon, P. Biecek · 28 Feb 2019 · 5 citations
  • Fairwashing: the risk of rationalization
    Ulrich Aïvodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp · 28 Jan 2019 · 142 citations · FaML
  • Interpretable machine learning: definitions, methods, and applications
    W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin-Xia Yu · 14 Jan 2019 · 1,416 citations · XAI, HAI