A Framework to Learn with Interpretation

arXiv:2010.09345 · 19 October 2020
Jayneel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc
Communities: AI4CE, FAtt

Papers citing "A Framework to Learn with Interpretation" (10 papers)
Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
    Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc
    SSL · 1 citation · 01 Jul 2024

Prototypical Self-Explainable Models Without Re-training
    Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
    2 citations · 13 Dec 2023

Interpretability-Aware Vision Transformer
    Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu
    ViT · 7 citations · 14 Sep 2023

BELLA: Black box model Explanations by Local Linear Approximations
    N. Radulovic, Albert Bifet, Fabian M. Suchanek
    FAtt · 1 citation · 18 May 2023

Posthoc Interpretation via Quantization
    Francesco Paissan, Cem Subakan, Mirco Ravanelli
    MQ · 6 citations · 22 Mar 2023

Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF
    Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc, G. Richard
    22 citations · 23 Feb 2022

This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation
    Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael C. Kampffmeyer
    49 citations · 27 Aug 2021

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
    Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
    FAtt · 297 citations · 17 Oct 2019

Methods for Interpreting and Understanding Deep Neural Networks
    G. Montavon, Wojciech Samek, K. Müller
    FaML · 2,238 citations · 24 Jun 2017

Towards A Rigorous Science of Interpretable Machine Learning
    Finale Doshi-Velez, Been Kim
    XAI, FaML · 3,684 citations · 28 Feb 2017