
Manipulating and Measuring Model Interpretability

21 February 2018 · arXiv: 1802.07810
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach

Papers citing "Manipulating and Measuring Model Interpretability"

14 of 114 citing papers shown:
ViCE: Visual Counterfactual Explanations for Machine Learning Models
Oscar Gomez, Steffen Holter, Jun Yuan, E. Bertini · AAML · 05 Mar 2020

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze · AAML, FAtt, XAI · 03 Feb 2020

Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman · ELM · 22 Jan 2020

"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
Vivian Lai, Han Liu, Chenhao Tan · 14 Jan 2020

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller · 08 Jan 2020

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
Yunfeng Zhang, Q. V. Liao, Rachel K. E. Bellamy · 07 Jan 2020

Measurement and Fairness
Abigail Z. Jacobs, Hanna M. Wallach · 11 Dec 2019

Towards Quantification of Explainability in Explainable Artificial Intelligence Methods
Sheikh Rabiul Islam, W. Eberle, S. Ghafoor · XAI · 22 Nov 2019

A Human-Grounded Evaluation of SHAP for Alert Processing
Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy · FAtt · 07 Jul 2019

Leveraging Latent Features for Local Explanations
Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu · FAtt · 29 May 2019

Learning Representations by Humans, for Humans
Sophie Hilgard, Nir Rosenfeld, M. Banaji, Jack Cao, David C. Parkes · OCL, HAI, AI4CE · 29 May 2019

From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
Jessica Morley, Luciano Floridi, Libby Kinsey, Anat Elhalal · 15 May 2019

"Why did you do that?": Explaining black box models with Inductive Synthesis
Görkem Paçaci, David Johnson, S. McKeever, A. Hamfelt · 17 Apr 2019

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova · FaML · 24 Oct 2016