ResearchTrend.AI
Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
22 June 2017 · XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,242 papers shown
Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations
Aditya Bhattacharya, Jeroen Ooge, Gregor Stiglic, K. Verbert
21 Feb 2023

Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience
Q. V. Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan
21 Feb 2023 · HAI

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
17 Feb 2023 · FAtt

On the Impact of Explanations on Understanding of Algorithmic Decision-Making
Timothée Schmude, Laura M. Koesten, Torsten Möller, Sebastian Tschiatschek
16 Feb 2023

Counterfactual Reasoning for Bias Evaluation and Detection in a Fairness under Unawareness setting
Giandomenico Cornacchia, Vito Walter Anelli, Fedelucio Narducci, Azzurra Ragone, E. Sciascio
16 Feb 2023 · MLAU, FaML

A novel approach to generate datasets with XAI ground truth to evaluate image models
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
11 Feb 2023
Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals
Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh
10 Feb 2023

Red Teaming Deep Neural Networks with Feature Synthesis Tools
Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Ke Zhang, K. Hariharan, Dylan Hadfield-Menell
08 Feb 2023 · AAML

Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication
B. Keenan, Kacper Sokol
07 Feb 2023

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz
07 Feb 2023

Five policy uses of algorithmic transparency and explainability
Matthew R. O'Shaughnessy
06 Feb 2023

Hypothesis Testing and Machine Learning: Interpreting Variable Effects in Deep Artificial Neural Networks using Cohen's f2
Wolfgang Messner
02 Feb 2023 · CML
Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Upol Ehsan, Koustuv Saha, M. D. Choudhury, Mark O. Riedl
01 Feb 2023

On the Complexity of Enumerating Prime Implicants from Decision-DNNF Circuits
Alexis de Colnet, Pierre Marquis
30 Jan 2023

Even if Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI
Saugat Aryal, Mark T. Keane
27 Jan 2023

Reflective Artificial Intelligence
P. R. Lewis, Stefan Sarkadi
25 Jan 2023

Explainable AI does not provide the explanations end-users are asking for
Savio Rozario, G. Cevora
25 Jan 2023 · XAI

ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents
Yotam Amitai, Guy Avni, Ofra Amir
24 Jan 2023

Explainable Deep Reinforcement Learning: State of the Art and Challenges
G. Vouros
24 Jan 2023 · XAI
Selective Explanations: Leveraging Human Input to Align Explainable AI
Vivian Lai, Yiming Zhang, Chacha Chen, Q. V. Liao, Chenhao Tan
23 Jan 2023

Interpretability in Activation Space Analysis of Transformers: A Focused Survey
Soniya Vijayakumar
22 Jan 2023 · AI4CE

Rationalization for Explainable NLP: A Survey
Sai Gurrapu, Ajay Kulkarni, Lifu Huang, Ismini Lourentzou, Laura J. Freeman, Feras A. Batarseh
21 Jan 2023

Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
C. Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, S. Rinzivillo
18 Jan 2023 · MedIm

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal
18 Jan 2023

Video Surveillance System Incorporating Expert Decision-making Process: A Case Study on Detecting Calving Signs in Cattle
Ryosuke Hyodo, Susumu Saito, Teppei Nakano, Makoto Akabane, Ryoichi Kasuga, Tetsuji Ogawa
10 Jan 2023
Language as a Latent Sequence: deep latent variable models for semi-supervised paraphrase generation
Jialin Yu, Alexandra I. Cristea, Anoushka Harit, Zhongtian Sun, O. Aduragba, Lei Shi, Noura Al Moubayed
05 Jan 2023 · VLM, BDL, DRL

PEAK: Explainable Privacy Assistant through Automated Knowledge Extraction
Gonul Ayci, Arzucan Özgür, Murat Şensoy, P. Yolum
05 Jan 2023

Mapping Knowledge Representations to Concepts: A Review and New Perspectives
Lars Holmberg, P. Davidsson, Per Linde
31 Dec 2022

A Theoretical Framework for AI Models Explainability with Application in Biomedicine
Matteo Rizzo, Alberto Veneri, A. Albarelli, Claudio Lucchese, Marco Nobile, Cristina Conati
29 Dec 2022 · XAI

Explainable AI for Bioinformatics: Methods, Tools, and Applications
Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker
25 Dec 2022
Explanation Regeneration via Information Bottleneck
Qintong Li, Zhiyong Wu, Lingpeng Kong, Wei Bi
19 Dec 2022

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
16 Dec 2022

Interpretable ML for Imbalanced Data
Damien Dablain, C. Bellinger, Bartosz Krawczyk, D. Aha, Nitesh Chawla
15 Dec 2022

Explanations Can Reduce Overreliance on AI Systems During Decision-Making
Helena Vasconcelos, Matthew Jörke, Madeleine Grunde-McLaughlin, Tobias Gerstenberg, Michael S. Bernstein, Ranjay Krishna
13 Dec 2022

Improving Accuracy Without Losing Interpretability: A ML Approach for Time Series Forecasting
Yiqi Sun, Zheng Shi, Jianshen Zhang, Yongzhi Qi, Hao Hu, Zuo-jun Shen
13 Dec 2022 · AI4TS

On Computing Probabilistic Abductive Explanations
Yacine Izza, Xuanxiang Huang, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin
12 Dec 2022 · FAtt, XAI
Towards a Learner-Centered Explainable AI: Lessons from the learning sciences
Anna Kawakami, Luke M. Guerdan, Yang Cheng, Anita Sun, Alison Hu, ..., Nikos Arechiga, Matthew H. Lee, Scott A. Carter, Haiyi Zhu, Kenneth Holstein
11 Dec 2022

FAIR AI Models in High Energy Physics
Javier Mauricio Duarte, Haoyang Li, Avik Roy, Ruike Zhu, Eliu A. Huerta, ..., Mark S. Neubauer, Sang Eon Park, M. Quinnan, R. Rusack, Zhizhen Zhao
09 Dec 2022

Criteria for Classifying Forecasting Methods
Tim Januschowski, Jan Gasthaus, Bernie Wang, David Salinas, Valentin Flunkert, Michael Bohlke-Schneider, Laurent Callot
07 Dec 2022 · AI4TS

Towards Better User Requirements: How to Involve Human Participants in XAI Research
Thu Nguyen, Jichen Zhu
06 Dec 2022

Relative Sparsity for Medical Decision Problems
Samuel J. Weisenthal, Sally W. Thurston, Ashkan Ertefaie
29 Nov 2022
Mixture of Decision Trees for Interpretable Machine Learning
Simeon Brüggenjürgen, Nina Schaaf, P. Kerschke, Marco F. Huber
26 Nov 2022 · MoE

Interpretability of an Interaction Network for identifying $H \rightarrow b\bar{b}$ jets
Avik Roy, Mark S. Neubauer
23 Nov 2022

Algorithmic Decision-Making Safeguarded by Human Knowledge
Ningyuan Chen, Mingya Hu, Wenhao Li
20 Nov 2022

Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
Gayda Mutahar, Tim Miller
19 Nov 2022 · FAtt

Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks
Stephen Casper, K. Hariharan, Dylan Hadfield-Menell
18 Nov 2022 · AAML

Towards Explaining Subjective Ground of Individuals on Social Media
Younghun Lee, Dan Goldwasser
18 Nov 2022

Explainability Via Causal Self-Talk
Nicholas A. Roy, Junkyung Kim, Neil C. Rabinowitz
17 Nov 2022 · CML

Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement
Montgomery Flora, Corey K. Potvin, A. McGovern, Shawn Handler
16 Nov 2022 · FAtt

(When) Are Contrastive Explanations of Reinforcement Learning Helpful?
Sanjana Narayanan, Isaac Lage, Finale Doshi-Velez
14 Nov 2022 · OffRL