Abduction-Based Explanations for Machine Learning Models
arXiv:1811.10656 · 26 November 2018
Authors: Alexey Ignatiev, Nina Narodytska, Sasha Rubin
Topics: FAtt

Papers citing "Abduction-Based Explanations for Machine Learning Models" (45 papers shown)
A Fast Kernel-based Conditional Independence test with Application to Causal Discovery
  Oliver Schacht, Biwei Huang · 16 May 2025
Thoughts without Thinking: Reconsidering the Explanatory Value of Chain-of-Thought Reasoning in LLMs through Agentic Pipelines
  R. Manuvinakurike, Emanuel Moss, E. A. Watkins, Saurav Sahay, G. Raffa, L. Nachman · LRM · 01 May 2025
Axiomatic Characterisations of Sample-based Explainers
  Leila Amgoud, Martin Cooper, Salim Debbaoui · FAtt · 09 Aug 2024
Locally-Minimal Probabilistic Explanations
  Yacine Izza, Kuldeep S. Meel, Sasha Rubin · 19 Dec 2023
Anytime Approximate Formal Feature Attribution
  Jinqiang Yu, Graham Farr, Alexey Ignatiev, Peter J. Stuckey · 12 Dec 2023
Multiple Different Black Box Explanations for Image Classifiers
  Hana Chockler, D. A. Kelly, Daniel Kroening · FAtt · 25 Sep 2023
When to Trust AI: Advances and Challenges for Certification of Neural Networks
  M. Kwiatkowska, Xiyue Zhang · AAML · 20 Sep 2023
BELLA: Black box model Explanations by Local Linear Approximations
  N. Radulovic, Albert Bifet, Fabian M. Suchanek · FAtt · 18 May 2023
Logic for Explainable AI
  Adnan Darwiche · 09 May 2023
A New Class of Explanations for Classifiers with Non-Binary Features
  Chunxi Ji, Adnan Darwiche · FAtt · 28 Apr 2023
Finding Minimum-Cost Explanations for Predictions made by Tree Ensembles
  John Törnblom, Emil Karlsson, Simin Nadjm-Tehrani · FAtt · 16 Mar 2023
COMET: Neural Cost Model Explanation Framework
  Isha Chaudhary, Alex Renda, Charith Mendis, Gagandeep Singh · 14 Feb 2023
On the Complexity of Enumerating Prime Implicants from Decision-DNNF Circuits
  Alexis de Colnet, Pierre Marquis · 30 Jan 2023
SpArX: Sparse Argumentative Explanations for Neural Networks [Technical Report]
  Hamed Ayoobi, Nico Potyka, Francesca Toni · 23 Jan 2023
Robust Explanation Constraints for Neural Networks
  Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller · FAtt · 16 Dec 2022
VeriX: Towards Verified Explainability of Deep Neural Networks
  Min Wu, Haoze Wu, Clark W. Barrett · AAML · 02 Dec 2022
Feature Necessity & Relevancy in ML Classifier Explanations
  Xuanxiang Huang, Martin C. Cooper, António Morgado, Jordi Planes, Sasha Rubin · FAtt · 27 Oct 2022
Logic-Based Explainability in Machine Learning
  Sasha Rubin · LRM, XAI · 24 Oct 2022
Computing Abductive Explanations for Boosted Trees
  Gilles Audemard, Jean-Marie Lagniez, Pierre Marquis, N. Szczepanski · 16 Sep 2022
On Computing Relevant Features for Explaining NBCs
  Yacine Izza, Sasha Rubin · 11 Jul 2022
Eliminating The Impossible, Whatever Remains Must Be True
  Jinqiang Yu, Alexey Ignatiev, Peter J. Stuckey, Nina Narodytska, Sasha Rubin · 20 Jun 2022
Cardinality-Minimal Explanations for Monotonic Neural Networks
  Ouns El Harzli, Bernardo Cuenca Grau, Ian Horrocks · FAtt · 19 May 2022
On the Computation of Necessary and Sufficient Explanations
  Adnan Darwiche, Chunxi Ji · FAtt · 20 Mar 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
  Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre · AAML · 15 Feb 2022
Provably efficient, succinct, and precise explanations
  Guy Blanc, Jane Lange, Li-Yang Tan · FAtt · 01 Nov 2021
Foundations of Symbolic Languages for Model Interpretability
  Marcelo Arenas, Daniel Baez, Pablo Barceló, Jorge A. Pérez, Bernardo Subercaseaux · ReLM, LRM · 05 Oct 2021
On Quantifying Literals in Boolean Logic and Its Applications to Explainable AI
  Adnan Darwiche, Pierre Marquis · 23 Aug 2021
On Efficiently Explaining Graph-Based Classifiers
  Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Sasha Rubin · FAtt · 02 Jun 2021
Explanations for Monotonic Classifiers
  Sasha Rubin, Thomas Gerspacher, M. Cooper, Alexey Ignatiev, Nina Narodytska · FAtt · 01 Jun 2021
A unified logical framework for explanations in classifier systems
  Xinghan Liu, E. Lorini · 30 May 2021
Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
  Emilio Gamba, B. Bogaerts, Tias Guns · LRM · 25 May 2021
Probabilistic Sufficient Explanations
  Eric Wang, Pasha Khosravi, Mathias Niepert · XAI, FAtt, TPM · 21 May 2021
SAT-Based Rigorous Explanations for Decision Lists
  Alexey Ignatiev, Sasha Rubin · XAI · 14 May 2021
On Guaranteed Optimal Robust Explanations for NLP Models
  Emanuele La Malfa, A. Zbrzezny, Rhiannon Michelmore, Nicola Paoletti, Marta Z. Kwiatkowska · FAtt · 08 May 2021
On the Computational Intelligibility of Boolean Classifiers
  Gilles Audemard, S. Bellart, Louenas Bounia, F. Koriche, Jean-Marie Lagniez, Pierre Marquis · 13 Apr 2021
Declarative Approaches to Counterfactual Explanations for Classification
  Leopoldo Bertossi · 15 Nov 2020
Abduction and Argumentation for Explainable Machine Learning: A Position Survey
  A. Kakas, Loizos Michael · 24 Oct 2020
On Explaining Decision Trees
  Yacine Izza, Alexey Ignatiev, Sasha Rubin · FAtt · 21 Oct 2020
Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
  Sasha Rubin, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, Nina Narodytska · FAtt · 13 Aug 2020
xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems
  Vineel Nagisetty, Laura Graves, Joseph Scott, Vijay Ganesh · GAN, DRL · 24 Feb 2020
On The Reasons Behind Decisions
  Adnan Darwiche, Auguste Hirth · FaML · 21 Feb 2020
Methods for Interpreting and Understanding Deep Neural Networks
  G. Montavon, Wojciech Samek, K. Müller · FaML · 24 Jun 2017
Learning Certifiably Optimal Rule Lists for Categorical Data
  E. Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, Cynthia Rudin · 06 Apr 2017
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
  Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer · AAML · 03 Feb 2017
Safety Verification of Deep Neural Networks
  Xiaowei Huang, M. Kwiatkowska, Sen Wang, Min Wu · AAML · 21 Oct 2016