ResearchTrend.AI
Visual Evaluative AI: A Hypothesis-Driven Tool with Concept-Based Explanations and Weight of Evidence
arXiv:2407.04710

13 May 2024
Thao Le, Tim Miller, Ruihan Zhang, L. Sonenberg, Ronal Singh

Papers citing "Visual Evaluative AI: A Hypothesis-Driven Tool with Concept-Based Explanations and Weight of Evidence"

  1. Towards the New XAI: A Hypothesis-Driven Approach to Decision Support Using Evidence · Thao Le, Tim Miller, L. Sonenberg, Ronal Singh · 02 Feb 2024
  2. Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision · Siyuan Yan, Zhen Yu, Xuelin Zhang, Dwarikanath Mahapatra, Shekhar S. Chandra, Monika Janda, Peter Soyer, Z. Ge · 02 Mar 2023
  3. Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support · Tim Miller · 24 Feb 2023
  4. Post-hoc Concept Bottleneck Models · Mert Yuksekgonul, Maggie Wang, James Zou · 31 May 2022
  5. From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence · David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé, Hanna M. Wallach, Jennifer Wortman Vaughan · FAtt · 27 Apr 2021
  6. Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors · Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein · FAtt · 27 Jun 2020
  7. Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance · Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld · 26 Jun 2020
  8. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) · Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres · FAtt · 30 Nov 2017
  9. Aggregated Residual Transformations for Deep Neural Networks · Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He · 16 Nov 2016
  10. Deep Residual Learning for Image Recognition · Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 10 Dec 2015