ResearchTrend.AI

Building and Interpreting Deep Similarity Models
arXiv:2003.05431 · 11 March 2020
Oliver Eberle, Jochen Büttner, Florian Kräutli, K. Müller, Matteo Valleriani, G. Montavon

Papers citing "Building and Interpreting Deep Similarity Models"

12 / 12 papers shown
  • ReSi: A Comprehensive Benchmark for Representational Similarity Measures
    Max Klabunde, Tassilo Wald, Tobias Schumacher, Klaus H. Maier-Hein, Markus Strohmaier, Adriana Iamnitchi
    AI4TS, VLM · 13 Mar 2025
  • The Clever Hans Effect in Unsupervised Learning
    Jacob R. Kauffmann, Jonas Dippel, Lukas Ruff, Wojciech Samek, Klaus-Robert Müller, G. Montavon
    SSL, CML, HAI · 15 Aug 2024
  • MambaLRP: Explaining Selective State Space Sequence Models
    F. Jafari, G. Montavon, Klaus-Robert Müller, Oliver Eberle
    Mamba · 11 Jun 2024
  • Explaining Text Similarity in Transformer Models
    Alexandros Vasileiou, Oliver Eberle
    10 May 2024
  • Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
    Lorenz Linhardt, Klaus-Robert Müller, G. Montavon
    AAML · 12 Apr 2023
  • Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
    Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
    FAtt · 30 Dec 2022
  • Label-Free Explainability for Unsupervised Models
    Jonathan Crabbé, M. Schaar
    FAtt, MILM · 03 Mar 2022
  • Toward Explainable AI for Regression Models
    S. Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, G. Montavon
    XAI · 21 Dec 2021
  • Towards Robust Explanations for Deep Neural Networks
    Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel
    FAtt · 18 Dec 2020
  • Higher-Order Explanations of Graph Neural Networks via Relevant Walks
    Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon
    05 Jun 2020
  • Methods for Interpreting and Understanding Deep Neural Networks
    G. Montavon, Wojciech Samek, K. Müller
    FaML · 24 Jun 2017
  • A Survey on Deep Learning in Medical Image Analysis
    G. Litjens, Thijs Kooi, B. Bejnordi, A. Setio, F. Ciompi, Mohsen Ghafoorian, Jeroen van der Laak, Bram van Ginneken, C. I. Sánchez
    OOD · 19 Feb 2017