Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI

17 July 2024
Qi Huang, Emanuele Mezzi, Osman Mutlu, Miltiadis Kofinas, Vidya Prasad, Shadnan Azwad Khan, Elena Ranguelova, Niki van Stein
ArXiv (abs) · PDF · HTML

Papers citing "Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI"

20 of 20 papers shown

The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers
Meike Nauta, Christin Seifert
80 · 11 · 0 · 26 Jul 2023

Why Should I Choose You? AutoXAI: A Framework for Selecting and Tuning eXplainable AI Solutions
Robin Cugny, Julien Aligon, Max Chevalier, G. Roman-Jimenez, O. Teste
48 · 14 · 0 · 06 Oct 2022

SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability
Wei Huang, Xingyu Zhao, Gao Jin, Xiaowei Huang
AAML · 59 · 30 · 0 · 19 Aug 2022

OmniXAI: A Library for Explainable AI
Wenzhuo Yang, Hung Le, Tanmay Laud, Silvio Savarese, Guosheng Lin
SyDa · 37 · 40 · 0 · 01 Jun 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
XAI, ELM · 47 · 175 · 0 · 14 Feb 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI · 100 · 409 · 0 · 20 Jan 2022

High-Resolution Image Synthesis with Latent Diffusion Models
Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, Bjorn Ommer
3DV · 437 · 15,515 · 0 · 20 Dec 2021

Interactive Analysis of CNN Robustness
Stefan Sietzen, Mathias Lechner, Judy Borowski, Ramin Hasani, Manuela Waldner
AAML · 62 · 10 · 0 · 14 Oct 2021

Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu, Sujay Khandagale, Colin White, Willie Neiswanger
105 · 66 · 0 · 23 Jun 2021

An Experimental Study of Semantic Continuity for Deep Learning Models
Shangxi Wu, Dongyuan Lu, Xian Zhao, Lizhang Chen, Jitao Sang
68 · 2 · 0 · 19 Nov 2020

Captum: A unified and generic model interpretability library for PyTorch
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
FAtt · 133 · 843 · 0 · 16 Sep 2020

InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs
Yujun Shen, Ceyuan Yang, Xiaoou Tang, Bolei Zhou
GAN, CVBM · 65 · 599 · 0 · 18 May 2020

Benchmarking Attribution Methods with Relative Feature Importance
Mengjiao Yang, Been Kim
FAtt, XAI · 69 · 141 · 0 · 23 Jul 2019

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt · 181 · 1,171 · 0 · 19 Jun 2018

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 1.1K · 21,939 · 0 · 22 May 2017

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 312 · 20,023 · 0 · 07 Oct 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 1.2K · 16,990 · 0 · 16 Feb 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
MedIm · 2.2K · 194,020 · 0 · 10 Dec 2015

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
ODL · 1.9K · 150,115 · 0 · 22 Dec 2014

Measuring and testing dependence by correlation of distances
G. Székely, Maria L. Rizzo, N. K. Bakirov
274 · 2,599 · 0 · 28 Mar 2008