ResearchTrend.AI

Explanation in Artificial Intelligence: Insights from the Social Sciences
  Tim Miller · 22 June 2017 · arXiv:1706.07269 · XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"
(50 of 1,242 citing papers shown)
Aligning Explanations with Human Communication
  Jacopo Teneggi, Zhenzhen Wang, Paul H. Yi, Tianmin Shu, Jeremias Sulam · 21 May 2025

Explaining Unreliable Perception in Automated Driving: A Fuzzy-based Monitoring Approach
  Aniket Salvi, Gereon Weiss, Mario Trapp · 20 May 2025

Truth or Twist? Optimal Model Selection for Reliable Label Flipping Evaluation in LLM-based Counterfactuals
  Qianli Wang, Van Bach Nguyen, Nils Feldhus, Luis Felipe Villa-Arenas, Christin Seifert, Sebastian Möller, Vera Schmitt · 20 May 2025

Through a Compressed Lens: Investigating the Impact of Quantization on LLM Explainability and Interpretability
  Qianli Wang, Mingyang Wang, Nils Feldhus, Simon Ostermann, Yuan Cao, Hinrich Schütze, Sebastian Möller, Vera Schmitt · 20 May 2025 · MQ

Information Science Principles of Machine Learning: A Causal Chain Meta-Framework Based on Formalized Information Mapping
  Jianfeng Xu · 19 May 2025 · AI4CE

Heart2Mind: Human-Centered Contestable Psychiatric Disorder Diagnosis System using Wearable ECG Monitors
  Hung Nguyen, Alireza Rahimi, Veronica Whitford, Hélène Fournier, Irina Kondratova, René Richard, Hung Cao · 16 May 2025
A Fast Kernel-based Conditional Independence test with Application to Causal Discovery
  Oliver Schacht, Biwei Huang · 16 May 2025

A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support
  Felix Liedeker, Olivia Sanchez-Graillet, Moana Seidler, Christian Brandt, Jörg Wellmer, Philipp Cimiano · 15 May 2025

Explaining Autonomous Vehicles with Intention-aware Policy Graphs
  Sara Montese, Victor Gimenez-Abalos, Atia Cortés, Ulises Cortés, Sergio Alvarez-Napagao · 13 May 2025

Explainable Reinforcement Learning Agents Using World Models
  Madhuri Singh, Amal Alabdulkarim, Gennie Mansi, Mark O. Riedl · 12 May 2025

Interpretable Event Diagnosis in Water Distribution Networks
  André Artelt, Stelios G. Vrachimis, Demetrios G. Eliades, Ulrike Kuhl, Barbara Hammer, Marios M. Polycarpou · 12 May 2025

Realistic Counterfactual Explanations for Machine Learning-Controlled Mobile Robots using 2D LiDAR
  Sindre Benjamin Remman, A. Lekkas · 11 May 2025
Integrating Explainable AI in Medical Devices: Technical, Clinical and Regulatory Insights and Recommendations
  Dima Alattal, Asal Khoshravan Azar, P. Myles, Richard Branson, Hatim Abdulhussein, Allan Tucker · 10 May 2025

What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
  Somayeh Molaei, Lionel P. Robert, Nikola Banovic · 09 May 2025

KERAIA: An Adaptive and Explainable Framework for Dynamic Knowledge Representation and Reasoning
  Stephen Richard Varey, A. D. Stefano, Anh Han · 07 May 2025

Robustness questions the interpretability of graph neural networks: what to do?
  Kirill Lukyanov, Georgii Sazonov, Serafim Boyarsky, Ilya Makarov · 05 May 2025 · AAML

A New Approach to Backtracking Counterfactual Explanations: A Unified Causal Framework for Efficient Model Interpretability
  Pouria Fatemi, Ehsan Sharifian, Mohammad Hossein Yassaee · 05 May 2025

xEEGNet: Towards Explainable AI in EEG Dementia Classification
  Andrea Zanola, Louis Fabrice Tshimanga, Federico Del Pup, Marco Baiesi, Manfredo Atzori · 30 Apr 2025

Disjunctive and Conjunctive Normal Form Explanations of Clusters Using Auxiliary Information
  Robert F. Downey, S. S. Ravi · 29 Apr 2025
Mitigating Societal Cognitive Overload in the Age of AI: Challenges and Directions
  Salem Lahlou · 28 Apr 2025

Enhancing Cell Counting through MLOps: A Structured Approach for Automated Cell Analysis
  Matteo Testi, Luca Clissa, Matteo Ballabio, Salvatore Ricciardi, Federico Baldo, Emanuele Frontoni, S. Moccia, Gennario Vessio · 28 Apr 2025

Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room
  Danial Hooshyar, Gustav Šír, Yeongwook Yang, Eve Kikas, Raija Hamalainen, T. Karkkainen, Dragan Gašević, Roger Azevedo · 22 Apr 2025

Do It For Me vs. Do It With Me: Investigating User Perceptions of Different Paradigms of Automation in Copilots for Feature-Rich Software
  Anjali Khurana, Xiaotian Su, April Yi Wang, Parmit K. Chilana · 22 Apr 2025

Causal DAG Summarization (Full Version)
  Anna Zeng, Michael Cafarella, Batya Kenig, Markos Markakis, Brit Youngmann, Babak Salimi · 21 Apr 2025 · CML

ScholarMate: A Mixed-Initiative Tool for Qualitative Knowledge Work and Information Sensemaking
  Runlong Ye, Patrick Yung Kang Lee, Matthew Varona, Oliver Huang, Carolina Nobre · 19 Apr 2025
Probabilistic Stability Guarantees for Feature Attributions
  Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong · 18 Apr 2025

AskQE: Question Answering as Automatic Evaluation for Machine Translation
  Dayeon Ki, Kevin Duh, Marine Carpuat · 15 Apr 2025

Revisiting the attacker's knowledge in inference attacks against Searchable Symmetric Encryption
  Marc Damie, Jean-Benoist Leger, Florian Hahn, Andreas Peter · 14 Apr 2025 · AAML

A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust
  Chameera De Silva, Thilina Halloluwa, Dhaval Vyas · 14 Apr 2025

GlyTwin: Digital Twin for Glucose Control in Type 1 Diabetes Through Optimal Behavioral Modifications Using Patient-Centric Counterfactuals
  Asiful Arefeen, Saman Khamesian, Maria Adela Grando, Bithika Thompson, Hassan Ghasemzadeh · 14 Apr 2025

Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being
  Esperança Amengual-Alcover, Antoni Jaume-i-Capó, Miquel Miró-Nicolau, Gabriel Moyà Alcover, Antonia Paniza-Fullana · 11 Apr 2025

Exploring the Effectiveness and Interpretability of Texts in LLM-based Time Series Models
  Zhengke Sun, Hangwei Qian, Ivor Tsang · 09 Apr 2025 · AI4TS
Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities
  M. Domnich, Rasmus Moorits Veski, Julius Valja, Kadi Tulver, Raul Vicente · 07 Apr 2025 · FAtt

Improving Counterfactual Truthfulness for Molecular Property Prediction through Uncertainty Quantification
  Jonas Teufel, Annika Leinweber, Pascal Friederich · 03 Apr 2025

Rubrik's Cube: Testing a New Rubric for Evaluating Explanations on the CUBE dataset
  Diana Galván-Sosa, Gabrielle Gaudeau, Pride Kavumba, Yunmeng Li, Hongyi Gu, Zheng Yuan, Keisuke Sakaguchi, P. Buttery · 31 Mar 2025 · LRM

Which LIME should I trust? Concepts, Challenges, and Solutions
  Patrick Knab, Sascha Marton, Udo Schlegel, Christian Bartelt · 31 Mar 2025 · FAtt

Exploring Explainable Multi-player MCTS-minimax Hybrids in Board Game Using Process Mining
  Yiyu Qian, Tim Miller, Zheng Qian, Liyuan Zhao · 30 Mar 2025

Interpretable Machine Learning in Physics: A Review
  Sebastian Johann Wetzel, Seungwoong Ha, Raban Iten, Miriam Klopotek, Ziming Liu · 30 Mar 2025 · AI4CE

Ranking Counterfactual Explanations
  Suryani Lim, H. Prade, G. Richard · 20 Mar 2025 · CML
Disentangling Fine-Tuning from Pre-Training in Visual Captioning with Hybrid Markov Logic
  Monika Shah, Somdeb Sarkhel, Deepak Venugopal · 18 Mar 2025 · MLLM, BDL, VLM

Interpretable Transformation and Analysis of Timelines through Learning via Surprisability
  O. Mokryn, Teddy Lazebnik, Hagit Ben-Shoshan · 06 Mar 2025 · AI4TS

Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification
  Van Bach Nguyen, C. Seifert, Jorg Schlotterer · 06 Mar 2025 · BDL

Conceptual Contrastive Edits in Textual and Vision-Language Retrieval
  Maria Lymperaiou, Giorgos Stamou · 01 Mar 2025 · VLM

Why Trust in AI May Be Inevitable
  Nghi Truong, Phanish Puranam, Ilia Testlin · 28 Feb 2025

QPM: Discrete Optimization for Globally Interpretable Image Classification
  Thomas Norrenbrock, Timo Kaiser, Sovan Biswas, R. Manuvinakurike, Bodo Rosenhahn · 27 Feb 2025

A Method for Evaluating the Interpretability of Machine Learning Models in Predicting Bond Default Risk Based on LIME and SHAP
  Yan Zhang, Lin Chen, Yixiang Tian · 26 Feb 2025 · FAtt
Less or More: Towards Glanceable Explanations for LLM Recommendations Using Ultra-Small Devices
  Xinru Wang, Mengjie Yu, Hannah Nguyen, Michael Iuzzolino, Tianyi Wang, ..., Ting Zhang, Naveen Sendhilnathan, Hrvoje Benko, Haijun Xia, Tanya R. Jonker · 26 Feb 2025

Can LLMs Explain Themselves Counterfactually?
  Zahra Dehghanighobadi, Asja Fischer, Muhammad Bilal Zafar · 25 Feb 2025 · LRM

All You Need for Counterfactual Explainability Is Principled and Reliable Estimate of Aleatoric and Epistemic Uncertainty
  Kacper Sokol, Eyke Hüllermeier · 24 Feb 2025

Comparing zero-shot self-explanations with human rationales in text classification
  Stephanie Brandl, Oliver Eberle · 24 Feb 2025