
Harnessing the Power of Explanations for Incremental Training: A LIME-Based Approach
v1 · v2 (latest)

2 November 2022
A. Mazumder, Niall Lyons, Ashutosh Pandey, Avik Santra, T. Mohsenin
Topics: FAtt
ArXiv (abs) · PDF · HTML
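
The paper builds on LIME-style local explanations to guide incremental training. For orientation only, here is a minimal, hypothetical sketch of producing LIME feature attributions with the open-source `lime` package on a placeholder scikit-learn classifier; it illustrates the explanation step LIME provides, not the authors' incremental-training pipeline, and the dataset and model choices are arbitrary assumptions.

```python
# Minimal sketch: LIME feature attributions via the open-source `lime` package.
# Illustrates the explanation technique only, NOT the paper's incremental-training
# method; the iris dataset and random forest are placeholder choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Any classifier exposing predict_proba can be explained this way.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the model, and fits a sparse local linear
# surrogate; the surrogate's weights serve as per-feature attributions.
pred = int(clf.predict(X[:1])[0])
exp = explainer.explain_instance(X[0], clf.predict_proba, labels=[pred], num_features=4)

for feature, weight in exp.as_list(label=pred):
    print(f"{feature:>35s}  {weight:+.3f}")
```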

Papers citing "Harnessing the Power of Explanations for Incremental Training: A LIME-Based Approach"

10 / 10 papers shown
1. Utilizing Explainable AI for improving the Performance of Neural Networks (07 Oct 2022)
   Huawei Sun, Lorenzo Servadei, Hao Feng, Michael Stephan, Robert Wille, Avik Santra

2. Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI (04 May 2022)
   Sami Ede, Serop Baghdadlian, Leander Weber, A. Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin
   Topics: CLL

3. Continual Learning in Human Activity Recognition: an Empirical Analysis of Regularization (06 Jul 2020)
   Saurav Jha, Martin Schiemer, Juan Ye
   Topics: CLL

4. Optimal Continual Learning has Perfect Memory and is NP-hard (09 Jun 2020)
   Jeremias Knoblauch, Hisham Husain, Tom Diethe
   Topics: CLL

5. Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks (03 Oct 2019)
   Mehdi Neshat, Zifan Wang, Bradley Alexander, Fan Yang, Zijian Zhang, Sirui Ding, Markus Wagner, Xia Hu
   Topics: FAtt

6. Gradient Episodic Memory for Continual Learning (26 Jun 2017)
   David Lopez-Paz, Marc'Aurelio Ranzato
   Topics: VLM, CLL

7. Overcoming catastrophic forgetting in neural networks (02 Dec 2016)
   J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, J. Veness, Guillaume Desjardins, ..., A. Grabska-Barwinska, Demis Hassabis, Claudia Clopath, D. Kumaran, R. Hadsell
   Topics: CLL

8. Grad-CAM: Why did you say that? (22 Nov 2016)
   Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, Dhruv Batra
   Topics: FAtt

9. The Mythos of Model Interpretability (10 Jun 2016)
   Zachary Chase Lipton
   Topics: FaML

10. "Why Should I Trust You?": Explaining the Predictions of Any Classifier (16 Feb 2016)
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
    Topics: FAtt, FaML