ResearchTrend.AI

Explanation in Artificial Intelligence: Insights from the Social Sciences
arXiv: 1706.07269
22 June 2017
Tim Miller
    XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,242 papers shown
Visual Analytics for Fine-grained Text Classification Models and Datasets
Munkhtulga Battogtokh
Y. Xing
Cosmin Davidescu
Alfie Abdul-Rahman
Michael Luck
Rita Borgo
36
0
0
21 Mar 2024
Dynamic Explanation Emphasis in Human-XAI Interaction with Communication Robot
Yosuke Fukuchi
Seiji Yamada
40
0
0
21 Mar 2024
How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey
Thu Nguyen
Alessandro Canossa
Jichen Zhu
35
3
0
21 Mar 2024
What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks
Kacper Sokol
Julia E. Vogt
47
11
0
19 Mar 2024
Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap
Sule Tekkesinoglu
Azra Habibovic
Lars Kunze
37
3
0
19 Mar 2024
Enhancing Trust in Autonomous Agents: An Architecture for Accountability and Explainability through Blockchain and Large Language Models
Laura Fernández-Becerra
Miguel Ángel González Santamarta
Ángel Manuel Guerrero Higueras
Francisco J. Rodríguez-Lera
Vicente Matellán Olivera
41
0
0
14 Mar 2024
People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI
Balint Gyevnar
Stephanie Droop
Tadeg Quillien
Shay B. Cohen
Neil R. Bramley
Christopher G. Lucas
Stefano V. Albrecht
54
3
0
11 Mar 2024
WatChat: Explaining perplexing programs by debugging mental models
Kartik Chandra
Tzu-Mao Li
Rachit Nigam
Joshua Tenenbaum
Jonathan Ragan-Kelley
LRM
19
4
0
08 Mar 2024
T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers
Mariano V. Ntrougkas
Nikolaos Gkalelis
Vasileios Mezaris
FAtt
ViT
33
5
0
07 Mar 2024
Explaining Genetic Programming Trees using Large Language Models
Paula Maddigan
Andrew Lensen
Bing Xue
AI4CE
42
5
0
06 Mar 2024
Even-Ifs From If-Onlys: Are the Best Semi-Factual Explanations Found Using Counterfactuals As Guides?
Saugat Aryal
Mark T. Keane
36
4
0
01 Mar 2024
Modeling the Quality of Dialogical Explanations
Milad Alshomary
Felix Lange
Meisam Booshehri
Meghdut Sengupta
Philipp Cimiano
Henning Wachsmuth
51
2
0
01 Mar 2024
Axe the X in XAI: A Plea for Understandable AI
Andrés Páez
11
0
0
01 Mar 2024
User Characteristics in Explainable AI: The Rabbit Hole of Personalization?
Robert Nimmo
Marios Constantinides
Ke Zhou
Daniele Quercia
Simone Stumpf
23
11
0
29 Feb 2024
Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations
Stephanie Brandl
Oliver Eberle
Tiago F. R. Ribeiro
Anders Søgaard
Nora Hollenstein
40
1
0
29 Feb 2024
Cultural Bias in Explainable AI Research: A Systematic Analysis
Uwe Peters
Mary Carman
23
11
0
28 Feb 2024
User Decision Guidance with Selective Explanation Presentation from Explainable-AI
Yosuke Fukuchi
Seiji Yamada
61
3
0
28 Feb 2024
Understanding the Dataset Practitioners Behind Large Language Model Development
Crystal Qian
Emily Reif
Minsuk Kahng
47
3
0
21 Feb 2024
What is the focus of XAI in UI design? Prioritizing UI design principles for enhancing XAI user experience
Dian Lei
Yao He
Jianyou Zeng
46
1
0
21 Feb 2024
SmartEx: A Framework for Generating User-Centric Explanations in Smart Environments
Mersedeh Sadeghi
Lars Herbold
Max Unterbusch
Andreas Vogelsang
43
5
0
20 Feb 2024
Right on Time: Revising Time Series Models by Constraining their Explanations
Maurice Kraus
David Steinmann
Antonia Wüst
Andre Kokozinski
Kristian Kersting
AI4TS
42
4
0
20 Feb 2024
Properties and Challenges of LLM-Generated Explanations
Jenny Kunz
Marco Kuhlmann
35
20
0
16 Feb 2024
Current and future roles of artificial intelligence in retinopathy of prematurity
Ali Jafarizadeh
Shadi Farabi Maleki
Parnia Pouya
Navid Sobhi
M. Abdollahi
...
Houshyar Asadi
R. Alizadehsani
Ruyan Tan
Sheikh Mohammad Shariful Islam
U. R. Acharya
AI4CE
34
6
0
15 Feb 2024
Explaining Probabilistic Models with Distributional Values
Luca Franceschi
Michele Donini
Cédric Archambeau
Matthias Seeger
FAtt
39
2
0
15 Feb 2024
Connecting Algorithmic Fairness to Quality Dimensions in Machine Learning in Official Statistics and Survey Production
Patrick Oliver Schenk
Christoph Kern
FaML
36
0
0
14 Feb 2024
TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection
Hui Liu
Wenya Wang
Haoru Li
Haoliang Li
47
3
0
12 Feb 2024
One-for-many Counterfactual Explanations by Column Generation
Andrea Lodi
Jasone Ramírez-Ayerbe
LRM
29
2
0
12 Feb 2024
ACTER: Diverse and Actionable Counterfactual Sequences for Explaining and Diagnosing RL Policies
Jasmina Gajcin
Ivana Dusparic
CML
OffRL
35
2
0
09 Feb 2024
Scalable Interactive Machine Learning for Future Command and Control
Anna Madison
Ellen R. Novoseller
Vinicius G. Goecks
Benjamin T. Files
Nicholas R. Waytowich
Alfred Yu
Vernon J. Lawhern
Steven Thurman
Christopher Kelshaw
Kaleb McDowell
35
4
0
09 Feb 2024
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
Anton Kuznietsov
Balint Gyevnar
Cheng Wang
Steven Peters
Stefano V. Albrecht
XAI
33
27
0
08 Feb 2024
Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain
Yongchen Zhou
Richard Jiang
26
3
0
07 Feb 2024
Explaining Learned Reward Functions with Counterfactual Trajectories
Jan Wehner
Frans Oliehoek
Luciano Cavalcante Siebert
37
0
0
07 Feb 2024
Collective Counterfactual Explanations via Optimal Transport
A. Ehyaei
Ali Shirali
Samira Samadi
OffRL
OT
28
1
0
07 Feb 2024
Leveraging Large Language Models for Hybrid Workplace Decision Support
Yujin Kim
Chin-Chia Hsu
33
1
0
06 Feb 2024
SIDU-TXT: An XAI Algorithm for NLP with a Holistic Assessment Approach
M. N. Jahromi
Satya M. Muddamsetty
Asta Sofie Stage Jarlner
Anna Murphy Hogenhaug
Thomas Gammeltoft-Hansen
T. Moeslund
34
2
0
05 Feb 2024
InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts
Vinitra Swamy
Syrielle Montariol
Julian Blackwell
Jibril Frej
Martin Jaggi
Tanja Käser
51
3
0
05 Feb 2024
XAI-CF -- Examining the Role of Explainable Artificial Intelligence in Cyber Forensics
Shahid Alam
Zeynep Altıparmak
18
1
0
04 Feb 2024
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
Xisen Jin
Xiang Ren
KELM
CLL
28
6
0
02 Feb 2024
EXMOS: Explanatory Model Steering Through Multifaceted Explanations and Data Configurations
Aditya Bhattacharya
Simone Stumpf
Lucija Gosak
Gregor Stiglic
K. Verbert
65
18
0
01 Feb 2024
Can we Constrain Concept Bottleneck Models to Learn Semantically Meaningful Input Features?
Jack Furby
Daniel Cunnington
Dave Braines
Alun D. Preece
40
3
0
01 Feb 2024
Linguistically Communicating Uncertainty in Patient-Facing Risk Prediction Models
Adarsa Sivaprasad
Ehud Reiter
39
0
0
31 Jan 2024
A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research
Sicong Cao
Xiaobing Sun
Ratnadira Widyasari
David Lo
Xiaoxue Wu
...
Jiale Zhang
Bin Li
Wei Liu
Di Wu
Yixin Chen
41
7
0
26 Jan 2024
Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper
Carson Ezell
Charlotte Siegmann
Noam Kolt
Taylor Lynn Curtis
...
Michael Gerovitch
David Bau
Max Tegmark
David M. Krueger
Dylan Hadfield-Menell
AAML
41
78
0
25 Jan 2024
Design, Development, and Deployment of Context-Adaptive AI Systems for Enhanced End-User Adoption
Christine P. Lee
29
3
0
24 Jan 2024
Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
Timothée Schmude
Laura M. Koesten
Torsten Moller
Sebastian Tschiatschek
33
3
0
24 Jan 2024
Visibility into AI Agents
Alan Chan
Carson Ezell
Max Kaufmann
K. Wei
Lewis Hammond
...
Nitarshan Rajkumar
David M. Krueger
Noam Kolt
Lennart Heim
Markus Anderljung
25
33
0
23 Jan 2024
Graph Edits for Counterfactual Explanations: A comparative study
Angeliki Dimitriou
Nikolaos Chaidos
Maria Lymperaiou
Giorgos Stamou
BDL
38
0
0
21 Jan 2024
A comprehensive study on fidelity metrics for XAI
Miquel Miró-Nicolau
Antoni Jaume-i-Capó
Gabriel Moyà Alcover
36
11
0
19 Jan 2024
Are self-explanations from Large Language Models faithful?
Andreas Madsen
Sarath Chandar
Siva Reddy
LRM
35
25
0
15 Jan 2024
Explainable Predictive Maintenance: A Survey of Current Methods, Challenges and Opportunities
Logan Cummins
Alexander Sommers
Somayeh Bakhtiari Ramezani
Sudip Mittal
Joseph E. Jabour
Maria Seale
Shahram Rahimi
45
21
0
15 Jan 2024