Cited By

DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu, Enzo Tartaglione
2 March 2023 · arXiv: 2303.01213
Papers citing "DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?" (9 / 9 papers shown)
Efficient Adaptation of Deep Neural Networks for Semantic Segmentation in Space Applications
Leonardo Olivi, Edoardo Santero Mormile, Enzo Tartaglione (22 Apr 2025)

LaCoOT: Layer Collapse through Optimal Transport
Victor Quétu, Nour Hezbri, Enzo Tartaglione (13 Jun 2024)

The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth
Victor Quétu, Zhu Liao, Enzo Tartaglione (27 Apr 2024)

NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer
Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione (24 Apr 2024)

Understanding the Role of Optimization in Double Descent
Chris Liu, Jeffrey Flanigan (06 Dec 2023)

The Quest of Finding the Antidote to Sparse Double Descent
Victor Quétu, Marta Milovanović (31 Aug 2023)

Sparse Double Descent in Vision Transformers: real or phantom threat?
Victor Quétu, Marta Milovanović, Enzo Tartaglione (26 Jul 2023)

Prune Your Model Before Distill It
Jinhyuk Park, Albert No (30 Sep 2021)

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag (06 Mar 2020)