ResearchTrend.AI
Why Regularized Auto-Encoders learn Sparse Representation? (arXiv:1505.05561)
Devansh Arpit, Yingbo Zhou, H. Ngo, V. Govindaraju
21 May 2015

Papers citing "Why Regularized Auto-Encoders learn Sparse Representation?"

23 / 23 papers shown

1. Leveraging Decoder Architectures for Learned Sparse Retrieval. Jingfen Qiao, Thong Nguyen, Evangelos Kanoulas, Andrew Yates. 25 Apr 2025.
2. Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias? Tom Jacobs, Chao Zhou, R. Burkholz. 17 Apr 2025.
3. Shape Modeling of Longitudinal Medical Images: From Diffeomorphic Metric Mapping to Deep Learning. Edwin Tay, Nazli Tümer, Amir A. Zadpoor. 27 Mar 2025.
4. Measuring Progress in Dictionary Learning for Language Model Interpretability with Board Game Models. Adam Karvonen, Benjamin Wright, Can Rager, Rico Angell, Jannik Brinkmann, Logan Smith, C. M. Verdun, David Bau, Samuel Marks. 31 Jul 2024.
5. Learning Sparsity of Representations with Discrete Latent Variables. Zhao Xu, Daniel Oñoro-Rubio, G. Serra, Mathias Niepert. 03 Apr 2023.
6. Universal Solutions of Feedforward ReLU Networks for Interpolations. Changcun Huang. 16 Aug 2022.
7. Semantic Autoencoder and Its Potential Usage for Adversarial Attack. Yurui Ming, Cuihuan Du, Chin-Teng Lin. 31 May 2022.
8. Enhancing Unsupervised Anomaly Detection with Score-Guided Network. Zongyuan Huang, Baohua Zhang, Guoqiang Hu, Longyuan Li, Yanyan Xu, Yaohui Jin. 10 Sep 2021.
9. Advances in Electron Microscopy with Deep Learning. Jeffrey M. Ede. 04 Jan 2021.
10. A Unifying Review of Deep and Shallow Anomaly Detection. Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, G. Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich, Klaus-Robert Müller. 24 Sep 2020.
11. Review: Deep Learning in Electron Microscopy. Jeffrey M. Ede. 17 Sep 2020.
12. Pseudo-Rehearsal for Continual Learning with Normalizing Flows. Jary Pomponi, Simone Scardapane, A. Uncini. 05 Jul 2020.
13. Minimizing FLOPs to Learn Efficient Sparse Representations. Biswajit Paria, Chih-Kuan Yeh, Ian En-Hsu Yen, N. Xu, Pradeep Ravikumar, Barnabás Póczós. 12 Apr 2020.
14. ML4Chem: A Machine Learning Package for Chemistry and Materials Science. Muammar El Khatib, W. A. Jong. 02 Mar 2020.
15. SoftAdapt: Techniques for Adaptive Loss Weighting of Neural Networks with Multi-Part Loss Functions. A. Heydari, Craig Thompson, A. Mehmood. 27 Dec 2019.
16. Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck. I. Manakov, Markus Rohm, Volker Tresp. 18 Nov 2019.
17. The Utility of Sparse Representations for Control in Reinforcement Learning. Vincent Liu, Raksha Kumaraswamy, Lei Le, Martha White. 15 Nov 2018.
18. Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration. Soham De, Anirbit Mukherjee, Enayat Ullah. 18 Jul 2018.
19. Autoencoders Learn Generative Linear Models. Thanh Van Nguyen, Raymond K. W. Wong, Chinmay Hegde. 02 Jun 2018.
20. Understanding Autoencoders with Information Theoretic Concepts. Shujian Yu, José C. Príncipe. 30 Mar 2018.
21. A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community. John E. Ball, Derek T. Anderson, Chee Seng Chan. 01 Sep 2017.
22. Sparse Coding and Autoencoders. Akshay Rangamani, Anirbit Mukherjee, A. Basu, T. Ganapathi, Ashish Arora, S. Chin, T. Tran. 12 Aug 2017.
23. On Optimality Conditions for Auto-Encoder Signal Recovery. Devansh Arpit, Yingbo Zhou, H. Ngo, N. Napp, V. Govindaraju. 23 May 2016.