ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions
arXiv:2407.11778 · 16 July 2024
Harrie Oosterhuis, Lijun Lyu, Avishek Anand
FAtt
ArXiv (abs) · PDF · HTML

Papers citing "Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions"

20 / 20 papers shown
Learning to Maximize Mutual Information for Dynamic Feature Selection
Ian Covert, Wei Qiu, Mingyu Lu, Nayoon Kim, Nathan White, Su-In Lee
46 · 29 · 0 · 02 Jan 2023

Can Rationalization Improve Robustness?
Howard Chen, Jacqueline He, Karthik Narasimhan, Danqi Chen
AAML
76 · 40 · 0 · 25 Apr 2022

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath
FAtt
133 · 70 · 0 · 02 Mar 2021

Explain and Predict, and then Predict Again
Zijian Zhang, Koustav Rudra, Avishek Anand
FAtt
63 · 51 · 0 · 11 Jan 2021

Active Feature Acquisition with Generative Surrogate Models
Yang Li, Junier B. Oliva
RALM, TPM
49 · 37 · 0 · 06 Oct 2020

Aligning Faithful Interpretations with their Social Attribution
Alon Jacovi, Yoav Goldberg
57 · 106 · 0 · 01 Jun 2020

An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction
Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer
57 · 101 · 0 · 01 May 2020

Sanity Checks for Saliency Metrics
Richard J. Tomsett, Daniel Harborne, Supriyo Chakraborty, Prudhvi K. Gurram, Alun D. Preece
XAI
100 · 170 · 0 · 29 Nov 2019

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab, W. Karlen
FAtt, CML
120 · 209 · 0 · 27 Oct 2019

TabNet: Attentive Interpretable Tabular Learning
Sercan O. Arik, Tomas Pfister
LMTD
188 · 1,353 · 0 · 20 Aug 2019

LassoNet: A Neural Network with Feature Sparsity
Ismael Lemhadri, Feng Ruan, L. Abraham, Robert Tibshirani
80 · 129 · 0 · 29 Jul 2019

Interpretable Neural Predictions with Differentiable Binary Variables
Jasmijn Bastings, Wilker Aziz, Ivan Titov
82 · 214 · 0 · 20 May 2019

Techniques for Interpretable Machine Learning
Mengnan Du, Ninghao Liu, Xia Hu
FaML
82 · 1,091 · 0 · 31 Jul 2018

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
Han Xiao, Kashif Rasul, Roland Vollgraf
283 · 8,904 · 0 · 25 Aug 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt
1.1K · 21,939 · 0 · 22 May 2017

Real Time Image Saliency for Black Box Classifiers
P. Dabkowski, Y. Gal
67 · 591 · 0 · 22 May 2017

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje
FAtt
201 · 3,873 · 0 · 10 Apr 2017

Categorical Reparameterization with Gumbel-Softmax
Eric Jang, S. Gu, Ben Poole
BDL
339 · 5,364 · 0 · 03 Nov 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML
1.2K · 16,990 · 0 · 16 Feb 2016

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt
312 · 7,308 · 0 · 20 Dec 2013