Stable and Interpretable Deep Learning for Tabular Data: Introducing InterpreTabNet with the Novel InterpreStability Metric

4 October 2023
Shiyun Wa, Xinai Lu, Minjuan Wang
arXiv: 2310.02870

Papers citing "Stable and Interpretable Deep Learning for Tabular Data: Introducing InterpreTabNet with the Novel InterpreStability Metric"

12 / 12 papers shown
LADDER: Latent Boundary-guided Adversarial Training
  Xiaowei Zhou, Ivor W. Tsang, Jie Yin · AAML · 08 Jun 2022

Lead-lag detection and network clustering for multivariate time series with an application to the US equity market
  Stefanos Bennett, Mihai Cucuringu, Gesine Reinert · AI4TS · 20 Jan 2022

TabNet: Attentive Interpretable Tabular Learning
  Sercan O. Arik, Tomas Pfister · LMTD · 20 Aug 2019

A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
  Erico Tjoa, Cuntai Guan · XAI · 17 Jul 2019

Sparse Sequence-to-Sequence Models
  Ben Peters, Vlad Niculae, André F. T. Martins · TPM · 14 May 2019

Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network
  A. Sherstinsky · 09 Aug 2018

A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models
  Tomáš Kliegr, Š. Bahník, Johannes Furnkranz · 09 Apr 2018

An Equivalence of Fully Connected Layer and Convolutional Layer
  Wei-Ying Ma, Jun Lu · 04 Dec 2017

Train longer, generalize better: closing the generalization gap in large batch training of neural networks
  Elad Hoffer, Itay Hubara, Daniel Soudry · ODL · 24 May 2017

Categorical Reparameterization with Gumbel-Softmax
  Eric Jang, S. Gu, Ben Poole · BDL · 03 Nov 2016

Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
  Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter · 23 Nov 2015

SMOTE: Synthetic Minority Over-sampling Technique
  Nitesh Chawla, Kevin W. Bowyer, Lawrence Hall, W. Kegelmeyer · AI4TS · 09 Jun 2011