Solvable Model for Inheriting the Regularization through Knowledge Distillation

1 December 2020
Luca Saglietti, Lenka Zdeborová
arXiv: 2012.00194

Papers citing "Solvable Model for Inheriting the Regularization through Knowledge Distillation"

11 citing papers:

The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
Kaito Takanami, Takashi Takahashi, Ayaka Sakata
27 Jan 2025

CURing Large Models: Compression via CUR Decomposition
Sanghyeon Park, Soo-Mook Moon
08 Jan 2025

Features are fate: a theory of transfer learning in high-dimensional regression
Javan Tahir, Surya Ganguli, Grant M. Rotskoff
10 Oct 2024

Asymptotic Dynamics of Alternating Minimization for Bilinear Regression
Koki Okajima, Takashi Takahashi
07 Feb 2024

Connecting NTK and NNGP: A Unified Theoretical Framework for Wide Neural Network Learning Dynamics
Yehonatan Avidan, Qianyi Li, H. Sompolinsky
08 Sep 2023

The Quest of Finding the Antidote to Sparse Double Descent
Victor Quétu, Marta Milovanović
31 Aug 2023

DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu, Enzo Tartaglione
02 Mar 2023

Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tile
Renzo Andri, Beatrice Bussolino, A. Cipolletta, Lukas Cavigelli, Zhe Wang
26 Sep 2022

An Analytical Theory of Curriculum Learning in Teacher-Student Networks
Luca Saglietti, Stefano Sarao Mannelli, Andrew M. Saxe
15 Jun 2021

Phase Transitions in Transfer Learning for High-Dimensional Perceptrons
Oussama Dhifallah, Yue M. Lu
06 Jan 2021

A Concentration of Measure Framework to study convex problems and other implicit formulation problems in machine learning
Cosme Louart
19 Oct 2020