
arXiv: 2106.08641
Best of both worlds: local and global explanations with human-understandable concepts

16 June 2021
Jessica Schrouff, Sebastien Baur, Shaobo Hou, Diana Mincu, Eric Loreaux, Ralph Blanes, James Wexler, Alan Karthikesalingam, Been Kim
Tags: FAtt

Papers citing "Best of both worlds: local and global explanations with human-understandable concepts" (8 papers shown)

1. Representational Similarity via Interpretable Visual Concepts (19 Mar 2025)
   Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona
   Tags: DRL · Metrics: 219 / 0 / 0

2. Variational Language Concepts for Interpreting Foundation Language Models (04 Oct 2024)
   Hengyi Wang, Shiwei Tan, Zhiqing Hong, Desheng Zhang, Hao Wang
   Metrics: 34 / 3 / 0

3. Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement (26 Nov 2023)
   Avani Gupta, Saurabh Saini, P. J. Narayanan
   Metrics: 33 / 6 / 0

4. Interpretability in Activation Space Analysis of Transformers: A Focused Survey (22 Jan 2023)
   Soniya Vijayakumar
   Tags: AI4CE · Metrics: 35 / 3 / 0

5. Predicting and Explaining Mobile UI Tappability with Vision Modeling and Saliency Analysis (05 Apr 2022)
   E. Schoop, Xin Zhou, Gang Li, Zhourong Chen, Björn Hartmann, Yang Li
   Tags: HAI, FAtt · Metrics: 32 / 32 / 0

6. Human-Centered Concept Explanations for Neural Networks (25 Feb 2022)
   Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
   Tags: FAtt · Metrics: 42 / 25 / 0

7. On Completeness-aware Concept-Based Explanations in Deep Neural Networks (17 Oct 2019)
   Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
   Tags: FAtt · Metrics: 122 / 297 / 0

8. Towards A Rigorous Science of Interpretable Machine Learning (28 Feb 2017)
   Finale Doshi-Velez, Been Kim
   Tags: XAI, FaML · Metrics: 257 / 3,696 / 0