arXiv:2010.13363 · Cited By
Provable Memorization via Deep Neural Networks using Sub-linear Parameters

26 October 2020
Sejun Park
Jaeho Lee
Chulhee Yun
Jinwoo Shin

Papers citing "Provable Memorization via Deep Neural Networks using Sub-linear Parameters"

10 / 10 papers shown
Deep Kalman Filters Can Filter
Blanka Hovart, Anastasis Kratsios, Yannick Limmer, Xuwei Yang
31 Dec 2024
On the Complexity of Neural Computation in Superposition
Micah Adler, Nir Shavit
05 Sep 2024
Minimum width for universal approximation using ReLU networks on compact domain
Namjun Kim, Chanho Min, Sejun Park
19 Sep 2023
Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators?
T. Kajitsuka, Issei Sato
26 Jul 2023
Memorization Capacity of Multi-Head Attention in Transformers
Sadegh Mahdavi, Renjie Liao, Christos Thrampoulidis
03 Jun 2023
Memorization Capacity of Neural Networks with Conditional Computation
Erdem Koyuncu
20 Mar 2023
When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Z. Luo
21 Oct 2022
Designing Universal Causal Deep Learning Models: The Geometric (Hyper)Transformer
Beatrice Acciaio, Anastasis Kratsios, G. Pammer
31 Jan 2022
The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Andrea Montanari, Yiqiao Zhong
25 Jul 2020
Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016