ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Manipulating and Measuring Model Interpretability
arXiv:1802.07810 · 21 February 2018
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach

Papers citing "Manipulating and Measuring Model Interpretability"

50 of 114 papers shown

Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users
  Julian Rosenberger, Philipp Schröppel, Sven Kruschel, Mathias Kraus, Patrick Zschech, Maximilian Förster
  11 May 2025 · FAtt

Beware of "Explanations" of AI
  David Martens, Galit Shmueli, Theodoros Evgeniou, Kevin Bauer, Christian Janiesch, ..., Claudia Perlich, Wouter Verbeke, Alona Zharova, Patrick Zschech, F. Provost
  09 Apr 2025

Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies
  Sunnie S. Y. Kim, J. Vaughan, Q. V. Liao, Tania Lombrozo, Olga Russakovsky
  12 Feb 2025

ConSim: Measuring Concept-Based Explanations' Effectiveness with Automated Simulatability
  Antonin Poché, Alon Jacovi, Agustin Picard, Victor Boutin, Fanny Jourdan
  10 Jan 2025

Citations and Trust in LLM Generated Responses
  Yifan Ding, Matthew Facciani, Amrit Poudel, Ellen Joyce, Salvador Aguiñaga, Balaji Veeramani, Sanmitra Bhattacharya, Tim Weninger
  03 Jan 2025 · HILM

Personalized Help for Optimizing Low-Skilled Users' Strategy
  Feng Gu, Wichayaporn Wongkamjan, Jordan Lee Boyd-Graber, Jonathan K. Kummerfeld, Denis Peskoff, Jonathan May
  14 Nov 2024

Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models
  Upol Ehsan, Mark O. Riedl
  09 Aug 2024

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
  Nitay Calderon, Roi Reichart
  27 Jul 2024

Graphical Perception of Saliency-based Model Explanations
  Yayan Zhao, Mingwei Li, Matthew Berger
  11 Jun 2024 · XAI, FAtt

Data Science Principles for Interpretable and Explainable AI
  Kris Sankaran
  17 May 2024 · FaML

"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
  Sunnie S. Y. Kim, Q. V. Liao, Mihaela Vorvoreanu, Steph Ballard, Jennifer Wortman Vaughan
  01 May 2024

Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making
  Shuai Ma, Qiaoyi Chen, Xinru Wang, Chengbo Zheng, Zhenhui Peng, Ming Yin, Xiaojuan Ma
  25 Mar 2024 · ELM

"Are You Really Sure?" Understanding the Effects of Human Self-Confidence Calibration in AI-Assisted Decision Making
  Shuai Ma, Xinru Wang, Ying Lei, Chuhan Shi, Ming Yin, Xiaojuan Ma
  14 Mar 2024

REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values
  Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso
  13 Mar 2024

On the Challenges and Opportunities in Generative AI
  Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Daubener, ..., F. Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
  28 Feb 2024

Succinct Interaction-Aware Explanations
  Sascha Xu, Joscha Cuppers, Jilles Vreeken
  08 Feb 2024 · FAtt

On Prediction-Modelers and Decision-Makers: Why Fairness Requires More Than a Fair Prediction Model
  Teresa Scantamburlo, Joachim Baumann, Christoph Heitz
  09 Oct 2023 · FaML

Automatic Concept Embedding Model (ACEM): No train-time concepts, No issue!
  Rishabh Jain
  07 Sep 2023 · LRM

My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
  Aimen Gaba, Zhanna Kaufman, Jason Chueng, Marie Shvakel, Kyle Wm. Hall, Yuriy Brun, Cindy Xiong Bearfield
  07 Aug 2023

A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
  Timo Speith, Markus Langer
  26 Jul 2023

AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap
  Q. V. Liao, J. Vaughan
  02 Jun 2023

In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
  Raymond Fok, Daniel S. Weld
  12 May 2023

Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations
  Bingsheng Yao, Prithviraj Sen, Lucian Popa, James A. Hendler, Dakuo Wang
  04 May 2023 · XAI, ELM, FAtt

Towards Evaluating Explanations of Vision Transformers for Medical Imaging
  Piotr Komorowski, Hubert Baniecki, P. Biecek
  12 Apr 2023 · MedIm

A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
  M. Rubaiyat Hossain Mondal, Prajoy Podder
  10 Apr 2023

Distrust in (X)AI -- Measurement Artifact or Distinct Construct?
  Nicolas Scharowski, S. Perrig
  29 Mar 2023 · HILM

Evaluating self-attention interpretability through human-grounded experimental protocol
  Milan Bhan, Nina Achache, Victor Legrand, A. Blangero, Nicolas Chesneau
  27 Mar 2023

How Accurate Does It Feel? -- Human Perception of Different Types of Classification Mistakes
  A. Papenmeier, Dagmar Kern, Daniel Hienert, Yvonne Kammerer, C. Seifert
  13 Feb 2023

Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations
  Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger
  04 Feb 2023

Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
  Upol Ehsan, Koustuv Saha, M. D. Choudhury, Mark O. Riedl
  01 Feb 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
  Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal
  18 Jan 2023

Improving Human-AI Collaboration With Descriptions of AI Behavior
  Ángel Alexander Cabrera, Adam Perer, Jason I. Hong
  06 Jan 2023

On the Relationship Between Explanation and Prediction: A Causal View
  Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
  13 Dec 2022 · FAtt, CML

Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
  Gayda Mutahar, Tim Miller
  19 Nov 2022 · FAtt

An Interpretable Hybrid Predictive Model of COVID-19 Cases using Autoregressive Model and LSTM
  Yangyi Zhang, Sui Tang, Guo-Ding Yu
  14 Nov 2022

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
  Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández
  02 Oct 2022

Learning When to Advise Human Decision Makers
  Gali Noti, Yiling Chen
  27 Sep 2022

Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
  Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
  23 Sep 2022 · FaML

Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making
  K. Inkpen, Shreya Chappidi, Keri Mallari, Besmira Nushi, Divya Ramesh, Pietro Michelucci, Vani Mandava, Libuše Hannah Vepřek, Gabrielle Quinn
  16 Aug 2022

"Is It My Turn?" Assessing Teamwork and Taskwork in Collaborative Immersive Analytics
  Michaela Benk, Raphael P. Weibel, Stefan Feuerriegel, Andrea Ferrario
  09 Aug 2022

A Human-Centric Take on Model Monitoring
  Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
  06 Jun 2022

Use-Case-Grounded Simulations for Explanation Evaluation
  Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
  05 Jun 2022 · FAtt, ELM

Interpretation Quality Score for Measuring the Quality of interpretability methods
  Sean Xie, Soroush Vosoughi, Saeed Hassanpour
  24 May 2022 · XAI

Who Goes First? Influences of Human-AI Workflow on Decision Making in Clinical Imaging
  Riccardo Fogliato, Shreya Chappidi, M. Lungren, Michael Fitzke, Mark Parkinson, Diane U Wilson, Paul Fisher, Eric Horvitz, K. Inkpen, Besmira Nushi
  19 May 2022

The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
  Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
  06 May 2022

Interactive Model Cards: A Human-Centered Approach to Model Documentation
  Anamaria Crisan, Margaret Drouhard, Jesse Vig, Nazneen Rajani
  05 May 2022 · HAI

Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
  Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan
  25 Apr 2022

Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making
  Max Schemmer, Patrick Hemmer, Niklas Kühl, Carina Benz, G. Satzger
  14 Apr 2022

Heterogeneity in Algorithm-Assisted Decision-Making: A Case Study in Child Abuse Hotline Screening
  Ling-chi Cheng, Alexandra Chouldechova
  12 Apr 2022

Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support
  Anna Kawakami, Venkatesh Sivaraman, H. Cheng, Logan Stapleton, Yanghuidi Cheng, Diana Qing, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, Kenneth Holstein
  05 Apr 2022