ResearchTrend.AI

Strategies to exploit XAI to improve classification systems
Andrea Apicella, Luca Di Lorenzo, Francesco Isgrò, A. Pollastro, R. Prevete
arXiv:2306.05801, 9 June 2023

Papers citing "Strategies to exploit XAI to improve classification systems" (11 papers)
  1. Toward the application of XAI methods in EEG-based systems
     Andrea Apicella, Francesco Isgrò, A. Pollastro, R. Prevete
     Tags: OOD, AI4TS | 44 / 14 / 0 | 12 Oct 2022
  2. HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection
     Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, Animesh Mukherjee
     120 / 578 / 0 | 18 Dec 2020
  3. Explanation-Guided Training for Cross-Domain Few-Shot Classification
     Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder
     64 / 89 / 0 | 17 Jul 2020
  4. Making deep neural networks right for the right scientific reasons by interacting with their explanations
     P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting
     104 / 213 / 0 | 15 Jan 2020
  5. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
     Han Xiao, Kashif Rasul, Roland Vollgraf
     283 / 8,920 / 0 | 25 Aug 2017
  6. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
     A. Ross, M. C. Hughes, Finale Doshi-Velez
     Tags: FAtt | 126 / 591 / 0 | 10 Mar 2017
  7. Axiomatic Attribution for Deep Networks
     Mukund Sundararajan, Ankur Taly, Qiqi Yan
     Tags: OOD, FAtt | 191 / 6,015 / 0 | 04 Mar 2017
  8. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
     Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
     Tags: FAtt, FaML | 1.2K / 17,027 / 0 | 16 Feb 2016
  9. Striving for Simplicity: The All Convolutional Net
     Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
     Tags: FAtt | 251 / 4,681 / 0 | 21 Dec 2014
  10. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
     Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
     Tags: FAtt | 312 / 7,316 / 0 | 20 Dec 2013
  11. Visualizing and Understanding Convolutional Networks
     Matthew D. Zeiler, Rob Fergus
     Tags: FAtt, SSL | 595 / 15,902 / 0 | 12 Nov 2013