The Sparse Manifold Transform (arXiv:1806.08887)
Yubei Chen, Dylan M. Paiton, Bruno A. Olshausen
23 June 2018 · MedIm

Papers citing "The Sparse Manifold Transform" (9 papers shown)

  • Identifying Interpretable Visual Features in Artificial and Biological Neural Systems
    David A. Klindt, Sophia Sanborn, Francisco Acosta, Frédéric Poitevin, Nina Miolane
    MILM, FAtt · 17 Oct 2023
  • URLOST: Unsupervised Representation Learning without Stationarity or Topology
    Zeyu Yun, Juexiao Zhang, Bruno A. Olshausen, Yann LeCun
    06 Oct 2023
  • A polar prediction model for learning to represent visual transformations
    P. Fiquet, Eero P. Simoncelli
    06 Mar 2023
  • Minimalistic Unsupervised Learning with the Sparse Manifold Transform
    Yubei Chen, Zeyu Yun, Y. Ma, Bruno A. Olshausen, Yann LeCun
    30 Sep 2022
  • Context-sensitive neocortical neurons transform the effectiveness and efficiency of neural information processing
    Ahsan Adeel, Mario Franco, Mohsin Raza, K. Ahmed
    15 Jul 2022
  • Stacked unsupervised learning with a network architecture found by supervised meta-learning
    Kyle L. Luther, H. S. Seung
    SSL · 06 Jun 2022
  • Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
    Kevin Lee Hunter, Lawrence Spracklen, Subutai Ahmad
    27 Dec 2021
  • How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
    Subutai Ahmad, Luiz Scheinkman
    27 Mar 2019
  • Geometric deep learning: going beyond Euclidean data
    M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, P. Vandergheynst
    GNN · 24 Nov 2016