From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
7 June 2022
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
Tags: FAtt

Papers citing "From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation"

50 of 83 citing papers shown (title, authors, topic tags, date).
Wasserstein Distances Made Explainable: Insights into Dataset Shifts and Transport Phenomena
Philip Naumann, Jacob R. Kauffmann, G. Montavon (09 May 2025)

Prisma: An Open Source Toolkit for Mechanistic Interpretability in Vision and Video
Sonia Joseph, Praneet Suresh, Lorenz Hufe, Edward Stevinson, Robert Graham, Yash Vadi, Danilo Bzdok, Sebastian Lapuschkin, Lee Sharkey, Blake A. Richards (28 Apr 2025)

Comparing Uncertainty Measurement and Mitigation Methods for Large Language Models: A Systematic Review
Toghrul Abbasli, Kentaroh Toyoda, Yuan Wang, Leon Witt, Muhammad Asif Ali, Yukai Miao, Dan Li, Qingsong Wei (25 Apr 2025) [UQCV]

PCBEAR: Pose Concept Bottleneck for Explainable Action Recognition
Jongseo Lee, Wooil Lee, Gyeong-Moon Park, Seong Tae Kim, Jinwoo Choi (17 Apr 2025)

Tokenize Image Patches: Global Context Fusion for Effective Haze Removal in Large Images
Jiuchen Chen, Xinyu Yan, Qizhi Xu, Kaiqi Li (13 Apr 2025) [VLM]
On Background Bias of Post-Hoc Concept Embeddings in Computer Vision DNNs
Gesina Schwalbe, Georgii Mikriukov, Edgar Heinert, Stavros Gerolymatos, Mert Keser, Alois Knoll, Matthias Rottmann, Annika Mütze (11 Apr 2025)

Pairwise Matching of Intermediate Representations for Fine-grained Explainability
Lauren Shrack, T. Haucke, Antoine Salaün, Arjun Subramonian, Sara Beery (28 Mar 2025)

Representational Similarity via Interpretable Visual Concepts
Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona (19 Mar 2025) [DRL]

CoE: Chain-of-Explanation via Automatic Visual Concept Circuit Description and Polysemanticity Quantification
Wenlong Yu, Qilong Wang, Chuang Liu, Dong Li, Q. Hu (19 Mar 2025) [LRM]

Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations
Eren Erogullari, Sebastian Lapuschkin, Wojciech Samek, Frederik Pahde (07 Mar 2025) [LLMSV, CoGe]
Causally Reliable Concept Bottleneck Models
Giovanni De Felice, Arianna Casanova Flores, Francesco De Santis, Silvia Santini, Johannes Schneider, Pietro Barbiero, Alberto Termine (06 Mar 2025)

Conceptualizing Uncertainty
Isaac Roberts, Alexander Schulz, Sarah Schroeder, Fabian Hinder, Barbara Hammer (05 Mar 2025) [UD]

NeurFlow: Interpreting Neural Networks through Neuron Groups and Functional Interactions
Tue Cao, Nhat X. Hoang, Hieu H. Pham, P. Nguyen, My T. Thai (22 Feb 2025)

A Close Look at Decomposition-based XAI-Methods for Transformer Language Models
L. Arras, Bruno Puri, Patrick Kahardipraja, Sebastian Lapuschkin, Wojciech Samek (21 Feb 2025)

FaceX: Understanding Face Attribute Classifiers through Summary Model Explanations
Ioannis Sarridis, C. Koutlis, Symeon Papadopoulos, Christos Diou (10 Dec 2024) [CVBM]
Aligning Generalisation Between Humans and Machines
Filip Ilievski, Barbara Hammer, F. V. Harmelen, Benjamin Paassen, S. Saralajew, ..., Vered Shwartz, Gabriella Skitalinskaya, Clemens Stachl, Gido M. van de Ven, T. Villmann (23 Nov 2024)

Explainable Artificial Intelligence for Medical Applications: A Review
Qiyang Sun, Alican Akman, Björn Schuller (15 Nov 2024)

Visual Question Answering in Ophthalmology: A Progressive and Practical Perspective
Xiaolan Chen, Ruoyu Chen, Pusheng Xu, Weiyi Zhang, Xianwen Shang, M. He, Danli Shi (22 Oct 2024)

Study on the Helpfulness of Explainable Artificial Intelligence
Tobias Labarta, Elizaveta Kulicheva, Ronja Froelian, Christian Geißler, Xenia Melman, Julian von Klitzing (14 Oct 2024) [ELM]

Sui Generis: Large Language Models for Authorship Attribution and Verification in Latin
Gleb Schmidt, Svetlana Gorovaia, Ivan P. Yamshchikov (11 Oct 2024)
Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization
R. Mekala, Frederik Pahde, Simon Baur, Sneha Chandrashekar, Madeline Diep, ..., Jackie Ma, Peter Eisert, Mikael Lindvall, Adam A. Porter, Wojciech Samek (07 Oct 2024) [MedIm, GAN]

Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks
Junlin Hou, Sicen Liu, Yequan Bie, Hongmei Wang, Andong Tan, Luyang Luo, Hao Chen (03 Oct 2024) [XAI]

Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
Jae Hee Lee, Georgii Mikriukov, Gesina Schwalbe, Stefan Wermter, D. Wolter (20 Sep 2024)

Decompose the model: Mechanistic interpretability in image models with Generalized Integrated Gradients (GIG)
Yearim Kim, Sangyu Han, Sangbum Han, Nojun Kwak (03 Sep 2024)
Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features
Thomas Schnake, Farnoush Rezaei Jafaria, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, G. Montavon, Klaus-Robert Müller (30 Aug 2024)

Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers
Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin (22 Aug 2024) [ViT]

The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis
Benjamin Frész, Vincent Philipp Goebels, Safa Omri, Danilo Brajovic, Andreas Aichele, Janika Kutz, Jens Neuhüttler, Marco F. Huber (22 Jul 2024)

Understanding Visual Feature Reliance through the Lens of Complexity
Thomas Fel, Louis Bethune, Andrew Kyle Lampinen, Thomas Serre, Katherine Hermann (08 Jul 2024) [FAtt, CoGe]
Model Guidance via Explanations Turns Image Classifiers into Segmentation Models
Xiaoyan Yu, Jannik Franzen, Wojciech Samek, Marina M.-C. Höhne, Dagmar Kainmueller (03 Jul 2024)

Human-like object concept representations emerge naturally in multimodal large language models
Changde Du, Kaicheng Fu, Bincheng Wen, Yi Sun, Jie Peng, ..., Chuncheng Zhang, Jinpeng Li, Shuang Qiu, Le Chang, Huiguang He (01 Jul 2024)

Enabling Regional Explainability by Automatic and Model-agnostic Rule Extraction
Yu Chen, Tianyu Cui, Alexander Capstick, Nan Fletcher-Loyd, Payam Barnaghi (25 Jun 2024)

A Moonshot for AI Oracles in the Sciences
Bryan Kaiser, Tailin Wu, Maike Sonnewald, Colin Thackray, Skylar Callis (25 Jun 2024) [AI4CE]

Fine-Grained Domain Generalization with Feature Structuralization
Wenlong Yu, Dongyue Chen, Qilong Wang, Qinghua Hu (13 Jun 2024)
Applications of Explainable artificial intelligence in Earth system science
Feini Huang, Shijie Jiang, Lu Li, Yongkun Zhang, Ye Zhang, Ruqing Zhang, Qingliang Li, Danxi Li, Wei Shangguan, Yongjiu Dai (12 Jun 2024)

CoLa-DCE -- Concept-guided Latent Diffusion Counterfactual Explanations
Franz Motzkus, Christian Hellert, Ute Schmid (03 Jun 2024) [DiffM]

Locally Testing Model Detections for Semantic Global Concepts
Franz Motzkus, Georgii Mikriukov, Christian Hellert, Ute Schmid (27 May 2024)

Data Science Principles for Interpretable and Explainable AI
Kris Sankaran (17 May 2024) [FaML]

When a Relation Tells More Than a Concept: Exploring and Evaluating Classifier Decisions with CoReX
Bettina Finzel, Patrick Hilme, Johannes Rabold, Ute Schmid (02 May 2024)
Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Luca Deck, Astrid Schomacker, Timo Speith, Jakob Schöffer, Lena Kästner, Niklas Kühl (29 Apr 2024)

Position: Do Not Explain Vision Models Without Context
Paulina Tomaszewska, Przemysław Biecek (28 Apr 2024)

Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification
C. Tinauer, A. Damulina, Maximilian Sackl, M. Soellradl, Reduan Achtibat, ..., Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, C. Langkammer (16 Apr 2024) [FAtt]

Contrastive Pretraining for Visual Concept Explanations of Socioeconomic Outcomes
Ivica Obadic, Alex Levering, Lars Pennig, Dario Augusto Borges Oliveira, Diego Marcos, Xiaoxiang Zhu (15 Apr 2024)

Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression
Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin (15 Apr 2024) [KELM]

How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
Romy Müller (03 Apr 2024) [HAI]
Towards Explaining Hypercomplex Neural Networks
Eleonora Lopez, Eleonora Grassucci, D. Capriotti, Danilo Comminiello (26 Mar 2024)

Revealing Vulnerabilities of Neural Networks in Parameter Learning and Defense Against Explanation-Aware Backdoors
Md Abdul Kadir, G. Addluri, Daniel Sonntag (25 Mar 2024) [AAML]

Forward Learning for Gradient-based Black-box Saliency Map Generation
Zeliang Zhang, Mingqian Feng, Jinyang Jiang, Rongyi Zhu, Yijie Peng, Chenliang Xu (22 Mar 2024) [FAtt]

WWW: A Unified Framework for Explaining What, Where and Why of Neural Networks by Interpretation of Neuron Concepts
Yong Hyun Ahn, Hyeon Bae Kim, Seong Tae Kim (29 Feb 2024)

The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success
André Calero Valdez, Moreen Heine, Thomas Franke, Nicole Jochems, Hans-Christian Jetter, Tim Schrills (22 Feb 2024)

AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers
Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek (08 Feb 2024)