Explanation in Artificial Intelligence: Insights from the Social Sciences

22 June 2017
Tim Miller
    XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,242 papers shown
The AI-DEC: A Card-based Design Method for User-centered AI Explanations
Christine P. Lee
M. Lee
Bilge Mutlu
HAI
43
4
0
26 May 2024
Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations
Qiaosi Wang
Chidimma L. Anyi
V. D. Swain
Ashok K. Goel
31
0
0
25 May 2024
Reassessing Evaluation Functions in Algorithmic Recourse: An Empirical Study from a Human-Centered Perspective
T. Tominaga
Naomi Yamashita
Takeshi Kurashima
28
1
0
23 May 2024
Why do explanations fail? A typology and discussion on failures in XAI
Clara Bove
Thibault Laugel
Marie-Jeanne Lesot
C. Tijus
Marcin Detyniecki
35
2
0
22 May 2024
Explainable offline automatic signature verifier to support forensic handwriting examiners
Moisés Díaz
M. A. Ferrer-Ballester
G. Vessio
44
3
0
21 May 2024
A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation
Iveta Becková
Stefan Pócos
Giulia Belgiovine
Marco Matarese
A. Sciutti
Carlo Mazzola
42
0
0
20 May 2024
Mitigating Text Toxicity with Counterfactual Generation
Milan Bhan
Jean-Noel Vittaut
Nina Achache
Victor Legrand
Nicolas Chesneau
A. Blangero
Juliette Murris
Marie-Jeanne Lesot
MedIm
40
0
0
16 May 2024
When factorization meets argumentation: towards argumentative explanations
Jinfeng Zhong
E. Negre
FAtt
31
0
0
13 May 2024
Design Requirements for Human-Centered Graph Neural Network Explanations
Pantea Habibi
Peyman Baghershahi
Sourav Medya
Debaleena Chattopadhyay
35
1
0
11 May 2024
To Trust or Not to Trust: Towards a novel approach to measure trust for XAI systems
Miquel Miró-Nicolau
Gabriel Moyà Alcover
Antoni Jaume-i-Capó
Manuel González Hidalgo
Maria Gemma Sempere Campello
Juan Antonio Palmer Sancho
28
0
0
09 May 2024
Relevant Irrelevance: Generating Alterfactual Explanations for Image Classifiers
Silvan Mertes
Tobias Huber
Christina Karle
Katharina Weitz
Ruben Schlagowski
Cristina Conati
Elisabeth André
28
0
0
08 May 2024
ACORN: Aspect-wise Commonsense Reasoning Explanation Evaluation
Ana Brassard
Benjamin Heinzerling
Keito Kudo
Keisuke Sakaguchi
Kentaro Inui
LRM
41
0
0
08 May 2024
Large Language Models Cannot Explain Themselves
Advait Sarkar
LRM
48
7
0
07 May 2024
Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis
Chun-Wei Tsai
Olga Kurasova
68
5
0
02 May 2024
Statistics and explainability: a fruitful alliance
Valentina Ghidini
29
0
0
30 Apr 2024
ViTHSD: Exploiting Hatred by Targets for Hate Speech Detection on Vietnamese Social Media Texts
Cuong Nhat Vo
Khanh Bao Huynh
Son T. Luu
Trong-Hop Do
47
1
0
30 Apr 2024
Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Luca Deck
Astrid Schomacker
Timo Speith
Jakob Schöffer
Lena Kästner
Niklas Kühl
48
4
0
29 Apr 2024
CEval: A Benchmark for Evaluating Counterfactual Text Generation
Van Bach Nguyen
Jorg Schlotterer
Christin Seifert
41
6
0
26 Apr 2024
LLMs for Generating and Evaluating Counterfactuals: A Comprehensive Study
Van Bach Nguyen
Paul Youssef
Jorg Schlotterer
Christin Seifert
44
16
0
26 Apr 2024
SIDEs: Separating Idealization from Deceptive Explanations in xAI
Emily Sullivan
54
2
0
25 Apr 2024
Fiper: a Visual-based Explanation Combining Rules and Feature Importance
Eleonora Cappuccio
D. Fadda
Rosa Lanzilotti
Salvatore Rinzivillo
FAtt
42
1
0
25 Apr 2024
MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models
G. Guo
Lifu Deng
A. Tandon
Alex Endert
Bum Chul Kwon
44
2
0
24 Apr 2024
Explainable AI models for predicting liquefaction-induced lateral spreading
Cheng-Hsi Hsiao
Krishna Kumar
Ellen Rathje
24
7
0
24 Apr 2024
ChEX: Interactive Localization and Region Description in Chest X-rays
Philip Muller
Georgios Kaissis
Daniel Rueckert
35
5
0
24 Apr 2024
Does It Make Sense to Explain a Black Box With Another Black Box?
J. Delaunay
Luis Galárraga
Christine Largouet
AAML
29
1
0
23 Apr 2024
Mechanistic Interpretability for AI Safety -- A Review
Leonard Bereska
E. Gavves
AI4CE
50
118
0
22 Apr 2024
Explainable Interfaces for Rapid Gaze-Based Interactions in Mixed Reality
Mengjie Yu
Dustin Harris
Ian Jones
Ting Zhang
Yue Liu
...
Krista E. Taylor
Zhenhong Hu
Mary A. Hood
Hrvoje Benko
Tanya R. Jonker
34
0
0
21 Apr 2024
Interval Abstractions for Robust Counterfactual Explanations
Junqi Jiang
Francesco Leofante
Antonio Rago
Francesca Toni
46
1
0
21 Apr 2024
A Framework for Feasible Counterfactual Exploration incorporating Causality, Sparsity and Density
Kleopatra Markou
Dimitrios Tomaras
V. Kalogeraki
Dimitrios Gunopulos
CML
26
0
0
20 Apr 2024
COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images
Dmytro Shvetsov
Joonas Ariva
M. Domnich
Raul Vicente
Dmytro Fishman
MedIm
34
0
0
19 Apr 2024
Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence
M. Domnich
Raul Vicente
21
3
0
19 Apr 2024
How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law
Benjamin Frész
Elena Dubovitskaya
Danilo Brajovic
Marco F. Huber
Christian Horz
62
7
0
19 Apr 2024
Prompt-Guided Generation of Structured Chest X-Ray Report Using a Pre-trained LLM
Hongzhao Li
Hongyu Wang
Xia Sun
Hua He
Jun Feng
37
4
0
17 Apr 2024
CAGE: Causality-Aware Shapley Value for Global Explanations
Nils Ole Breuer
Andreas Sauter
Majid Mohammadi
Erman Acar
FAtt
47
3
0
17 Apr 2024
Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda
Johannes Schneider
83
26
0
15 Apr 2024
Beyond One-Size-Fits-All: Adapting Counterfactual Explanations to User Objectives
Orfeas Menis Mastromichalakis
Jason Liartis
Giorgos Stamou
24
1
0
12 Apr 2024
Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models
Marvin Pafla
Kate Larson
Mark Hancock
48
6
0
11 Apr 2024
Interactive Prompt Debugging with Sequence Salience
Ian Tenney
Ryan Mullins
Bin Du
Shree Pandya
Minsuk Kahng
Lucas Dixon
LRM
40
1
0
11 Apr 2024
Incremental XAI: Memorable Understanding of AI with Incremental Explanations
Jessica Y. Bo
Pan Hao
Brian Y Lim
CLL
44
7
0
10 Apr 2024
Allowing humans to interactively guide machines where to look does not always improve human-AI team's classification accuracy
Giang Nguyen
Mohammad Reza Taesiri
Sunnie S. Y. Kim
Anh Totti Nguyen
HAI
AAML
FAtt
42
6
0
08 Apr 2024
Designing for Complementarity: A Conceptual Framework to Go Beyond the Current Paradigm of Using XAI in Healthcare
Elisa Rubegni
Omran Ayoub
Stefania Maria Rita Rizzo
Marco Barbero
G. Bernegger
Francesca Faraci
Francesca Mangili
Emiliano Soldini
P. Trimboli
Alessandro Facchini
31
1
0
06 Apr 2024
The SaTML '24 CNN Interpretability Competition: New Innovations for Concept-Level Interpretability
Stephen Casper
Jieun Yun
Joonhyuk Baek
Yeseong Jung
Minhwan Kim
...
A. Nicolson
Arush Tagade
Jessica Rumbelow
Hieu Minh Nguyen
Dylan Hadfield-Menell
32
2
0
03 Apr 2024
Explainability in JupyterLab and Beyond: Interactive XAI Systems for Integrated and Collaborative Workflows
G. Guo
Dustin L. Arendt
Alex Endert
53
1
0
02 Apr 2024
A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures
Thanh Tam Nguyen
T. T. Huynh
Zhao Ren
Thanh Toan Nguyen
Phi Le Nguyen
Hongzhi Yin
Quoc Viet Hung Nguyen
83
8
0
31 Mar 2024
Automatic explanation of the classification of Spanish legal judgments in jurisdiction-dependent law categories with tree estimators
Jaime González-González
Francisco de Arriba-Pérez
Silvia García-Méndez
Andrea Busto-Castiñeira
Francisco J. González Castaño
AILaw
ELM
44
6
0
30 Mar 2024
Towards a Framework for Evaluating Explanations in Automated Fact Verification
Neema Kotonya
Francesca Toni
42
5
0
29 Mar 2024
Leveraging Counterfactual Paths for Contrastive Explanations of POMDP Policies
Benjamin Kraske
Zakariya Laouar
Zachary Sunberg
32
0
0
28 Mar 2024
PIPNet3D: Interpretable Detection of Alzheimer in MRI Scans
Lisa Anita De Santi
Jorg Schlotterer
Michael Scheschenja
Joel Wessendorf
Meike Nauta
Vincenzo Positano
Christin Seifert
MedIm
38
3
0
27 Mar 2024
Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making
Shuai Ma
Qiaoyi Chen
Xinru Wang
Chengbo Zheng
Zhenhui Peng
Ming Yin
Xiaojuan Ma
ELM
42
20
0
25 Mar 2024
RankingSHAP -- Listwise Feature Attribution Explanations for Ranking Models
Maria Heuss
Maarten de Rijke
Avishek Anand
202
1
0
24 Mar 2024