Example-based Explanations for Random Forests using Machine Unlearning

arXiv:2402.05007 · 7 February 2024
Tanmay Surve, Romila Pradhan
Topics: FaML, FAtt

Papers citing "Example-based Explanations for Random Forests using Machine Unlearning"

14 papers shown.

  • CAVE-Net: Classifying Abnormalities in Video Capsule Endoscopy
    Ishita Harish, Saurav Mishra, Neha Bhadoria, Rithik Kumar, Madhav Arora, Syed Rameem Zahra, Ankur Gupta
    2 citations · 31 Dec 2024

  • Interpretable Data-Based Explanations for Fairness Debugging
    Romila Pradhan, Jiongli Zhu, Boris Glavic, Babak Salimi
    54 citations · 17 Dec 2021

  • Adaptive Machine Unlearning
    Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
    Topics: MU · 183 citations · 08 Jun 2021

  • Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals
    Sainyam Galhotra, Romila Pradhan, Babak Salimi
    Topics: CML · 107 citations · 22 Mar 2021

  • Complaint-driven Training Data Debugging for Query 2.0
    Weiyuan Wu, Lampros Flokas, Eugene Wu, Jiannan Wang
    44 citations · 12 Apr 2020

  • On Second-Order Group Influence Functions for Black-Box Predictions
    S. Basu, Xuchen You, Soheil Feizi
    Topics: TDI · 71 citations · 01 Nov 2019

  • Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
    Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
    Topics: XAI · 6,293 citations · 22 Oct 2019

  • Path-Specific Counterfactual Fairness
    Silvia Chiappa, Thomas P. S. Gillam
    Topics: CML, FaML · 340 citations · 22 Feb 2018

  • A Survey Of Methods For Explaining Black Box Models
    Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
    Topics: XAI · 3,967 citations · 06 Feb 2018

  • Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
    Sandra Wachter, Brent Mittelstadt, Chris Russell
    Topics: MLAU · 2,360 citations · 01 Nov 2017

  • A Unified Approach to Interpreting Model Predictions
    Scott M. Lundberg, Su-In Lee
    Topics: FAtt · 22,002 citations · 22 May 2017

  • Understanding Black-box Predictions via Influence Functions
    Pang Wei Koh, Percy Liang
    Topics: TDI · 2,899 citations · 14 Mar 2017

  • Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
    Alexandra Chouldechova
    Topics: FaML · 2,120 citations · 24 Oct 2016

  • "Why Should I Trust You?": Explaining the Predictions of Any Classifier
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
    Topics: FAtt, FaML · 17,027 citations · 16 Feb 2016