Expand-and-Cluster: Parameter Recovery of Neural Networks

25 April 2023
Flavio Martinelli, Berfin Simsek, W. Gerstner, Johanni Brea

Papers citing "Expand-and-Cluster: Parameter Recovery of Neural Networks"

10 papers shown

Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks
Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew M. Saxe
MLT · 66 · 1 · 0 · 08 Mar 2025

Learning Gaussian Multi-Index Models with Gradient Flow: Time Complexity and Directional Convergence
Berfin Simsek, Amire Bendjeddou, Daniel Hsu
44 · 0 · 0 · 13 Nov 2024

Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting
Nicholas Carlini, J. Chávez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Adi Shamir
AAML · 27 · 1 · 0 · 08 Oct 2024

OTOV2: Automatic, Generic, User-Friendly
Tianyi Chen, Luming Liang, Tian Ding, Zhihui Zhu, Ilya Zharkov
VLM, MQ · 45 · 31 · 0 · 13 Mar 2023

Toy Models of Superposition
Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, T. Henighan, ..., Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, C. Olah
AAML, MILM · 125 · 318 · 0 · 21 Sep 2022

Git Re-Basin: Merging Models modulo Permutation Symmetries
Samuel K. Ainsworth, J. Hayase, S. Srinivasa
MoMe · 255 · 314 · 0 · 11 Sep 2022

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
MQ · 141 · 684 · 0 · 31 Jan 2021

A Survey on Neural Network Interpretability
Yu Zhang, Peter Tiño, A. Leonardis, K. Tang
FaML, XAI · 144 · 661 · 0 · 28 Dec 2020

Cryptanalytic Extraction of Neural Network Models
Nicholas Carlini, Matthew Jagielski, Ilya Mironov
FedML, MLAU, MIACV, AAML · 72 · 134 · 0 · 10 Mar 2020

Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach
Grant M. Rotskoff, Eric Vanden-Eijnden
59 · 118 · 0 · 02 May 2018