ResearchTrend.AI

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller · 22 June 2017 · arXiv:1706.07269 · XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 of 1,242 citing papers shown.
  • Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards
    Marco Valentino, Ian Pratt-Hartmann, André Freitas · XAI, LRM · 05 May 2021
  • Explanation-Based Human Debugging of NLP Models: A Survey
    Piyawat Lertvittayakumjorn, Francesca Toni · LRM · 30 Apr 2021
  • Twin Systems for DeepCBR: A Menagerie of Deep Learning and Case-Based Reasoning Pairings for Explanation and Data Augmentation
    Mark T. Keane, Eoin M. Kenny, M. Temraz, Derek Greene, Barry Smyth · 29 Apr 2021
  • From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence
    David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé, Hanna M. Wallach, Jennifer Wortman Vaughan · FAtt · 27 Apr 2021
  • TrustyAI Explainability Toolkit
    Rob Geada, Tommaso Teofili, Rui Vieira, Rebecca Whitworth, Daniele Zonca · 26 Apr 2021
  • Axes for Sociotechnical Inquiry in AI Research
    Sarah Dean, T. Gilbert, Nathan Lambert, Tom Zick · 26 Apr 2021
  • Rich Semantics Improve Few-shot Learning
    Mohamed Afham, Salman Khan, Muhammad Haris Khan, Muzammal Naseer, Fahad Shahbaz Khan · VLM · 26 Apr 2021
  • Exploiting Explanations for Model Inversion Attacks
    Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim · MIACV · 26 Apr 2021
  • A Picture is Worth a Collaboration: Accumulating Design Knowledge for Computer-Vision-based Hybrid Intelligence Systems
    Patrick Zschech, J. Walk, Kai Heinrich, Michael Vössing, Niklas Kühl · 23 Apr 2021
  • A Novel Interaction-based Methodology Towards Explainable AI with Better Understanding of Pneumonia Chest X-ray Images
    S. Lo, Yiqiao Yin · 19 Apr 2021
  • Interpretability in deep learning for finance: a case study for the Heston model
    D. Brigo, Xiaoshan Huang, A. Pallavicini, Haitz Sáez de Ocáriz Borde · FAtt · 19 Apr 2021
  • GraphSVX: Shapley Value Explanations for Graph Neural Networks
    Alexandre Duval, Fragkiskos D. Malliaros · FAtt · 18 Apr 2021
  • Explaining Answers with Entailment Trees
    Bhavana Dalvi, Peter Alexander Jansen, Oyvind Tafjord, Zhengnan Xie, Hannah Smith, Leighanna Pipatanangkura, Peter Clark · ReLM, FAtt, LRM · 17 Apr 2021
  • LEx: A Framework for Operationalising Layers of Machine Learning Explanations
    Ronal Singh, Upol Ehsan, M. Cheong, Mark O. Riedl, Tim Miller · 15 Apr 2021
  • NICE: An Algorithm for Nearest Instance Counterfactual Explanations
    Dieter Brughmans, Pieter Leyman, David Martens · 15 Apr 2021
  • Machine learning and deep learning
    Christian Janiesch, Patrick Zschech, Kai Heinrich · 12 Apr 2021
  • Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation
    Tomasz Szandała · FAtt · 11 Apr 2021
  • Connecting Attributions and QA Model Behavior on Realistic Counterfactuals
    Xi Ye, Rohan Nair, Greg Durrett · 09 Apr 2021
  • Individual Explanations in Machine Learning Models: A Survey for Practitioners
    Alfredo Carrillo, Luis F. Cantú, Alejandro Noriega · FaML · 09 Apr 2021
  • Question-Driven Design Process for Explainable AI User Experiences
    Q. V. Liao, Milena Pribić, Jaesik Han, Sarah Miller, Daby M. Sow · 08 Apr 2021
  • Towards a Rigorous Evaluation of Explainability for Multivariate Time Series
    Rohit Saluja, A. Malhi, Samanta Knapic, Kary Främling, C. Cavdar · XAI, AI4TS · 06 Apr 2021
  • Measuring Linguistic Diversity During COVID-19
    Artaches Ambartsoumian, F. Popowich, Benjamin Adams · 03 Apr 2021
  • Reconciling the Discrete-Continuous Divide: Towards a Mathematical Theory of Sparse Communication
    André F. T. Martins · 01 Apr 2021
  • Modeling Users and Online Communities for Abuse Detection: A Position on Ethics and Explainability
    Pushkar Mishra, H. Yannakoudakis, Ekaterina Shutova · 31 Mar 2021
  • Anomaly-Based Intrusion Detection by Machine Learning: A Case Study on Probing Attacks to an Institutional Network
    E. Tufan, C. Tezcan, Cengiz Acartürk · 31 Mar 2021
  • Contrastive Explanations of Plans Through Model Restrictions
    Benjamin Krarup, Senka Krivic, Daniele Magazzeni, D. Long, Michael Cashmore, David E. Smith · 29 Mar 2021
  • Situated Case Studies for a Human-Centered Design of Explanation User Interfaces
    Claudia Müller-Birn, Katrin Glinka, Peter Sörries, Michael Tebbe, S. Michl · 29 Mar 2021
  • Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice
    David S. Watson, Limor Gultchin, Ankur Taly, Luciano Floridi · 27 Mar 2021
  • Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models
    Arijit Ray, Michael Cogswell, Xiaoyu Lin, Kamran Alipour, Ajay Divakaran, Yi Yao, Giedrius Burachas · FAtt · 26 Mar 2021
  • Towards interpretability of Mixtures of Hidden Markov Models
    Negar Safinianaini, Henrik Boström · 23 Mar 2021
  • Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals
    Sainyam Galhotra, Romila Pradhan, Babak Salimi · CML · 22 Mar 2021
  • Interpreting Deep Learning Models with Marginal Attribution by Conditioning on Quantiles
    M. Merz, Ronald Richman, A. Tsanakas, M. Wüthrich · FAtt · 22 Mar 2021
  • Trustworthy Transparency by Design
    Valentin Zieglmeier, A. Pretschner · 19 Mar 2021
  • Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
    Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou · AAML, FaML, XAI, HAI · 19 Mar 2021
  • XProtoNet: Diagnosis in Chest Radiography with Global and Local Explanations
    Eunji Kim, Siwon Kim, Minji Seo, Sungroh Yoon · ViT, FAtt · 19 Mar 2021
  • Integrated Decision and Control: Towards Interpretable and Computationally Efficient Driving Intelligence
    Yang Guan, Yangang Ren, Qi Sun, Shengbo Eben Li, Haitong Ma, Jingliang Duan, Yifan Dai, B. Cheng · 18 Mar 2021
  • Interpretability of a Deep Learning Model in the Application of Cardiac MRI Segmentation with an ACDC Challenge Dataset
    Adrianna Janik, J. Dodd, Georgiana Ifrim, Kris Sankaran, Kathleen M. Curran · 15 Mar 2021
  • A Study of Automatic Metrics for the Evaluation of Natural Language Explanations
    Miruna Clinciu, Arash Eshghi, H. Hastie · 15 Mar 2021
  • Explanations in Autonomous Driving: A Survey
    Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze · 09 Mar 2021
  • A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI
    J. Duell · 08 Mar 2021
  • Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
    Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge · CML · 07 Mar 2021
  • Learning to Predict with Supporting Evidence: Applications to Clinical Risk Prediction
    Aniruddh Raghu, John Guttag, K. Young, E. Pomerantsev, Adrian Dalca, Collin M. Stultz · 04 Mar 2021
  • Contrastive Explanations for Model Interpretability
    Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg · 02 Mar 2021
  • Interpretable Artificial Intelligence through the Lens of Feature Interaction
    Michael Tsang, James Enouen, Yan Liu · FAtt · 01 Mar 2021
  • Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence
    Atoosa Kasirzadeh · 01 Mar 2021
  • If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
    Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth · CML · 26 Feb 2021
  • Benchmarking and Survey of Explanation Methods for Black Box Models
    F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo · XAI · 25 Feb 2021
  • A Local Method for Identifying Causal Relations under Markov Equivalence
    Zhuangyan Fang, Yue Liu, Z. Geng, Shengyu Zhu, Yangbo He · CML · 25 Feb 2021
  • Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
    Sarah Wiegreffe, Ana Marasović · XAI · 24 Feb 2021
  • Artificial Intelligence as an Anti-Corruption Tool (AI-ACT) -- Potentials and Pitfalls for Top-down and Bottom-up Approaches
    N. Köbis, C. Starke, Iyad Rahwan · 23 Feb 2021