Restricting the Flow: Information Bottlenecks for Attribution (arXiv:2001.00396)
Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf
2 January 2020 · FAtt

Papers citing "Restricting the Flow: Information Bottlenecks for Attribution"

39 / 39 papers shown

DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky, Kangsoo Jung, Ernest Valveny, Dimosthenis Karatzas
12 May 2025

Class-Dependent Perturbation Effects in Evaluating Time Series Attributions
Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp
24 Feb 2025 · AAML

Narrowing Information Bottleneck Theory for Multimodal Image-Text Representations Interpretability
Zhiyu Zhu, Zhibo Jin, Jiayu Zhang, Nan Yang, Jiahao Huang, Jianlong Zhou, Fang Chen
16 Feb 2025

Extending Information Bottleneck Attribution to Video Sequences
Veronika Solopova, Lucas Schmidt, Dorothea Kolossa
28 Jan 2025

Variational Language Concepts for Interpreting Foundation Language Models
Hengyi Wang, Shiwei Tan, Zhiqing Hong, Desheng Zhang, Hao Wang
04 Oct 2024

Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
13 Dec 2023

HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation
Lu Chen, Siyu Lou, Keyan Zhang, Jin Huang, Quanshi Zhang
04 Apr 2023 · TDI, FAtt

Task-specific Fine-tuning via Variational Information Bottleneck for Weakly-supervised Pathology Whole Slide Image Classification
Honglin Li, Chenglu Zhu, Yunlong Zhang, Yuxuan Sun, Zhongyi Shui, Wenwei Kuang, S. Zheng, L. Yang
15 Mar 2023

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
17 Jan 2023

Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods
Ming-Xiu Jiang, Saeed Khorram, Li Fuxin
13 Dec 2022 · FAtt

Interpretability with full complexity by constraining feature information
Kieran A. Murphy, Danielle Bassett
30 Nov 2022 · FAtt

Evaluating Feature Attribution Methods for Electrocardiogram
J. Suh, Jimyeong Kim, Euna Jung, Wonjong Rhee
23 Nov 2022 · FAtt

AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning
Tao Yang, Jinghao Deng, Xiaojun Quan, Qifan Wang, Shaoliang Nie
12 Oct 2022

A clinically motivated self-supervised approach for content-based image retrieval of CT liver images
Kristoffer Wickstrøm, Eirik Agnalt Østmo, Keyur Radiya, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen
11 Jul 2022 · SSL

Variational Distillation for Multi-View Learning
Xudong Tian, Zhizhong Zhang, Cong Wang, Wensheng Zhang, Yanyun Qu, Lizhuang Ma, Zongze Wu, Yuan Xie, Dacheng Tao
20 Jun 2022

Visualizing Deep Neural Networks with Topographic Activation Maps
A. Krug, Raihan Kabir Ratul, Christopher Olson, Sebastian Stober
07 Apr 2022 · FAtt, AI4CE

Towards Explainable Evaluation Metrics for Natural Language Generation
Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei-Ye Zhao, Yang Gao, Steffen Eger
21 Mar 2022 · AAML, ELM

Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF
Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc, G. Richard
23 Feb 2022

Improving Subgraph Recognition with Variational Graph Information Bottleneck
Junchi Yu, Jie Cao, Ran He
18 Dec 2021

Automatic Neural Network Pruning that Efficiently Preserves the Model Accuracy
Thibault Castells, Seul-Ki Yeom
18 Nov 2021 · 3DV

Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation
Jungbeom Lee, Jooyoung Choi, J. Mok, Sungroh Yoon
13 Oct 2021 · SSeg

The Eval4NLP Shared Task on Explainable Quality Estimation: Overview and Results
M. Fomicheva, Piyawat Lertvittayakumjorn, Wei-Ye Zhao, Steffen Eger, Yang Gao
08 Oct 2021 · ELM

Discovery of New Multi-Level Features for Domain Generalization via Knowledge Corruption
A. Frikha, Denis Krompass, Volker Tresp
09 Sep 2021 · OOD

Translation Error Detection as Rationale Extraction
M. Fomicheva, Lucia Specia, Nikolaos Aletras
27 Aug 2021

Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features
Ashkan Khakzar, Yang Zhang, W. Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab
01 Apr 2021 · FAtt

Efficient Explanations from Empirical Explainers
Robert Schwarzenberg, Nils Feldhus, Sebastian Möller
29 Mar 2021 · FAtt

Explaining Representation by Mutual Information
Li Gu
28 Mar 2021 · SSL, FAtt

BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation
Jungbeom Lee, Jihun Yi, Chaehun Shin, Sungroh Yoon
16 Mar 2021 · ISeg

Inserting Information Bottlenecks for Attribution in Transformers
Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy J. Lin
27 Dec 2020

Explainable Abstract Trains Dataset
Manuel de Sousa Ribeiro, L. Krippahl, João Leite
15 Dec 2020

Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Wolfgang Stammer, P. Schramowski, Kristian Kersting
25 Nov 2020 · FAtt

Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert, Scott M. Lundberg, Su-In Lee
21 Nov 2020 · FAtt

Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert, Scott M. Lundberg, Su-In Lee
06 Nov 2020 · FAtt

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen, Yangfeng Ji
01 Oct 2020 · AAML, VLM

Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
M. Schlichtkrull, Nicola De Cao, Ivan Titov
01 Oct 2020 · AI4CE

Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir
18 May 2020 · FAtt

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu
23 Apr 2020 · AAML

On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
08 Jan 2020 · AAML, AI4CE

When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf
20 Dec 2019 · BDL, FAtt, XAI