ResearchTrend.AI
Explanation in Artificial Intelligence: Insights from the Social Sciences

22 June 2017
Tim Miller
    XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,242 papers shown
Interpret-able feedback for AutoML systems
Behnaz Arzani
Kevin Hsieh
Haoxian Chen
21
3
0
22 Feb 2021
Believe The HiPe: Hierarchical Perturbation for Fast, Robust, and Model-Agnostic Saliency Mapping
Jessica Cooper
Ognjen Arandjelovic
David J. Harrison
AAML
14
13
0
22 Feb 2021
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh
Kathleen M. Lewis
John Guttag
Arvind Satyanarayan
FAtt
45
25
0
17 Feb 2021
What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer
Daniel Oster
Timo Speith
Holger Hermanns
Lena Kästner
Eva Schmidt
Andreas Sesing
Kevin Baum
XAI
68
415
0
15 Feb 2021
The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions
J. Ferreira
Mateus de Souza Monteiro
13
23
0
10 Feb 2021
Principles of Explanation in Human-AI Systems
Shane T. Mueller
Elizabeth S. Veinott
R. Hoffman
Gary Klein
Lamia Alam
T. Mamun
W. Clancey
XAI
11
57
0
09 Feb 2021
Mitigating belief projection in explainable artificial intelligence via Bayesian Teaching
Scott Cheng-Hsin Yang
Wai Keen Vong
Ravi B. Sojitra
Tomas Folke
Patrick Shafto
24
42
0
07 Feb 2021
Bandits for Learning to Explain from Explanations
Freya Behrens
Stefano Teso
Davide Mottin
FAtt
11
1
0
07 Feb 2021
CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
Ana Lucic
Maartje ter Hoeve
Gabriele Tolomei
Maarten de Rijke
Fabrizio Silvestri
120
143
0
05 Feb 2021
"I Don't Think So": Summarizing Policy Disagreements for Agent Comparison
Yotam Amitai
Ofra Amir
LLMAG
27
12
0
05 Feb 2021
AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks
Mckane Andrus
Sarah Dean
T. Gilbert
Nathan Lambert
Tom Zick
22
6
0
04 Feb 2021
EUCA: the End-User-Centered Explainable AI Framework
Weina Jin
Jianyu Fan
D. Gromala
Philippe Pasquier
Ghassan Hamarneh
42
24
0
04 Feb 2021
When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Peter Hase
Joey Tianyi Zhou
XAI
30
87
0
03 Feb 2021
Directive Explanations for Actionable Explainability in Machine Learning Applications
Ronal Singh
Paul Dourish
Piers Howe
Tim Miller
L. Sonenberg
Eduardo Velloso
F. Vetere
16
32
0
03 Feb 2021
Evaluating the Interpretability of Generative Models by Interactive Reconstruction
A. Ross
Nina Chen
Elisa Zhao Hang
Elena L. Glassman
Finale Doshi-Velez
105
49
0
02 Feb 2021
Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens
Maia L. Jacobs
Jeffrey He
Melanie F. Pradier
Barbara D. Lam
Andrew C Ahn
T. McCoy
R. Perlis
Finale Doshi-Velez
Krzysztof Z. Gajos
54
145
0
01 Feb 2021
Counterfactual State Explanations for Reinforcement Learning Agents via Generative Deep Learning
Matthew Lyle Olson
Roli Khanna
Lawrence Neal
Fuxin Li
Weng-Keen Wong
CML
40
69
0
29 Jan 2021
Explaining Natural Language Processing Classifiers with Occlusion and Language Modeling
David Harbecke
AAML
27
2
0
28 Jan 2021
Cognitive Perspectives on Context-based Decisions and Explanations
Marcus Westberg
Kary Främling
14
1
0
25 Jan 2021
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh
Steven R. Gomez
K. Nam
Arvind Satyanarayan
34
126
0
24 Jan 2021
Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
Danding Wang
Wencan Zhang
Brian Y. Lim
FAtt
27
22
0
23 Jan 2021
Explainable Artificial Intelligence Approaches: A Survey
Sheikh Rabiul Islam
W. Eberle
S. Ghafoor
Mohiuddin Ahmed
XAI
46
103
0
23 Jan 2021
A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations
Barry Smyth
Mark T. Keane
CML
46
26
0
22 Jan 2021
GLocalX -- From Local to Global Explanations of Black Box AI Models
Mattia Setzu
Riccardo Guidotti
A. Monreale
Franco Turini
D. Pedreschi
F. Giannotti
19
116
0
19 Jan 2021
Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making
Han Liu
Vivian Lai
Chenhao Tan
33
117
0
13 Jan 2021
Expanding Explainability: Towards Social Transparency in AI systems
Upol Ehsan
Q. V. Liao
Michael J. Muller
Mark O. Riedl
Justin D. Weisz
43
394
0
12 Jan 2021
Machine Learning Uncertainty as a Design Material: A Post-Phenomenological Inquiry
J. Benjamin
Arne Berger
Nick Merrill
James Pierce
51
91
0
11 Jan 2021
Argument Schemes and Dialogue for Explainable Planning
Quratul-ain Mahesar
Simon Parsons
23
2
0
07 Jan 2021
How Much Automation Does a Data Scientist Want?
Dakuo Wang
Q. V. Liao
Yunfeng Zhang
Udayan Khurana
Horst Samulowitz
Soya Park
Michael J. Muller
Lisa Amini
AI4CE
42
55
0
07 Jan 2021
Predicting Illness for a Sustainable Dairy Agriculture: Predicting and Explaining the Onset of Mastitis in Dairy Cows
C. Ryan
Christophe Guéret
D. Berry
Medb Corcoran
Mark T. Keane
Brian Mac Namee
32
6
0
06 Jan 2021
One-shot Policy Elicitation via Semantic Reward Manipulation
Aaquib Tabrez
Ryan Leonard
Bradley Hayes
21
2
0
06 Jan 2021
Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making
Md. Naimul Hoque
Klaus Mueller
CML
59
30
0
03 Jan 2021
Modeling Disclosive Transparency in NLP Application Descriptions
Michael Stephen Saxon
Sharon Levy
Xinyi Wang
Alon Albalak
Wenjie Wang
27
7
0
02 Jan 2021
Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models
Tongshuang Wu
Marco Tulio Ribeiro
Jeffrey Heer
Daniel S. Weld
60
244
0
01 Jan 2021
Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
Ana Valeria González
Gagan Bansal
Angela Fan
Robin Jia
Yashar Mehdad
Srini Iyer
AAML
42
24
0
30 Dec 2020
dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python
Hubert Baniecki
Wojciech Kretowicz
Piotr Piątyszek
J. Wiśniewski
P. Biecek
FaML
34
95
0
28 Dec 2020
Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Alexis Ross
Ana Marasović
Matthew E. Peters
43
121
0
27 Dec 2020
Brain-inspired Search Engine Assistant based on Knowledge Graph
Xuejiao Zhao
Huanhuan Chen
Zhenchang Xing
Chunyan Miao
22
31
0
25 Dec 2020
GANterfactual - Counterfactual Explanations for Medical Non-Experts using Generative Adversarial Learning
Silvan Mertes
Tobias Huber
Katharina Weitz
Alexander Heimerl
Elisabeth André
GAN
AAML
MedIm
39
69
0
22 Dec 2020
Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning
Jiaheng Xie
Xinyu Liu
HAI
33
10
0
21 Dec 2020
On Relating 'Why?' and 'Why Not?' Explanations
Alexey Ignatiev
Nina Narodytska
Nicholas M. Asher
Sasha Rubin
XAI
FAtt
LRM
28
26
0
21 Dec 2020
Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks
Kieran Browne
Ben Swift
AAML
GAN
33
29
0
18 Dec 2020
XAI-P-T: A Brief Review of Explainable Artificial Intelligence from Practice to Theory
Nazanin Fouladgar
Kary Främling
XAI
15
4
0
17 Dec 2020
On Exploiting Hitting Sets for Model Reconciliation
Stylianos Loukas Vasileiou
Alessandro Previti
William Yeoh
19
26
0
16 Dec 2020
Explanation from Specification
Harish Naik
Gyorgy Turán
XAI
27
0
0
13 Dec 2020
The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?
Thomas P. Quinn
Stephan Jacobs
M. Senadeera
Vuong Le
S. Coghlan
33
112
0
10 Dec 2020
CommPOOL: An Interpretable Graph Pooling Framework for Hierarchical Graph Representation Learning
Haoteng Tang
Guixiang Ma
Lifang He
Heng-Chiao Huang
Liang Zhan
GNN
40
24
0
10 Dec 2020
Influence-Driven Explanations for Bayesian Network Classifiers
Antonio Rago
Emanuele Albini
P. Baroni
Francesca Toni
20
9
0
10 Dec 2020
Deep Argumentative Explanations
Emanuele Albini
Piyawat Lertvittayakumjorn
Antonio Rago
Francesca Toni
AAML
29
4
0
10 Dec 2020
Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang
Mariella Dimiccoli
Brian Y. Lim
FAtt
34
18
0
10 Dec 2020