Unveiling the Power of Sparse Neural Networks for Feature Selection
arXiv:2408.04583 · 8 August 2024
Zahra Atashgahi, Tennison Liu, Mykola Pechenizkiy, Raymond N. J. Veldhuis, Decebal Constantin Mocanu, M. van der Schaar

Papers citing "Unveiling the Power of Sparse Neural Networks for Feature Selection" (19 papers)

1. Where to Pay Attention in Sparse Training for Feature Selection?
   Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, Decebal Constantin Mocanu (26 Nov 2022)

2. Local Contrastive Feature learning for Tabular Data [SSL]
   Zhabiz Gharibshah, Xingquan Zhu (19 Nov 2022)

3. Composite Feature Selection using Deep Ensembles
   F. Imrie, Alexander Norcliffe, Pietro Lio, M. van der Schaar (01 Nov 2022)

4. MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
   Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, ..., Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin (26 Oct 2021)

5. Deep Neural Networks and Tabular Data: A Survey [LMTD]
   V. Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, Gjergji Kasneci (05 Oct 2021)

6. Dynamic Sparse Training for Deep Reinforcement Learning
   Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone (08 Jun 2021)

7. Top-KAST: Top-K Always Sparse Training
   Siddhant M. Jayakumar, Razvan Pascanu, Jack W. Rae, Simon Osindero, Erich Elsen (07 Jun 2021)

8. SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training [LMTD]
   Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C. Bayan Bruss, Tom Goldstein (02 Jun 2021)

9. Sparse Training Theory for Scalable and Efficient Agents
   Decebal Constantin Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale (02 Mar 2021)

10. Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks [MQ]
    Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste (31 Jan 2021)

11. Feature Importance Ranking for Deep Learning
    Maksymilian Wojtas, Ke Chen (18 Oct 2020)

12. The Hardware Lottery
    Sara Hooker (14 Sep 2020)

13. PyTorch: An Imperative Style, High-Performance Deep Learning Library [ODL]
    Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala (03 Dec 2019)

14. Rigging the Lottery: Making All Tickets Winners
    Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, Erich Elsen (25 Nov 2019)

15. TabNet: Attentive Interpretable Tabular Learning [LMTD]
    Sercan O. Arik, Tomas Pfister (20 Aug 2019)

16. AutoML: A Survey of the State-of-the-Art
    Xin He, Kaiyong Zhao, Xiaowen Chu (02 Aug 2019)

17. LassoNet: A Neural Network with Feature Sparsity
    Ismael Lemhadri, Feng Ruan, L. Abraham, Robert Tibshirani (29 Jul 2019)

18. Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science
    Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, M. Gibescu, A. Liotta (15 Jul 2017)

19. Improved Training of Wasserstein GANs [GAN]
    Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, Aaron Courville (31 Mar 2017)