Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks

Tolga Ergen, Mert Pilanci
18 October 2021

Papers citing "Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks"

14 / 14 papers shown

Geometric Algebra Planes: Convex Implicit Neural Volumes
Irmak Sivgin, Sara Fridovich-Keil, Gordon Wetzstein, Mert Pilanci
20 Nov 2024

Approaching Deep Learning through the Spectral Dynamics of Weights
David Yunis, Kumar Kshitij Patel, Samuel Wheeler, Pedro H. P. Savarese, Gal Vardi, Karen Livescu, Michael Maire, Matthew R. Walter
21 Aug 2024

Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization
Fangzhao Zhang, Mert Pilanci
03 Feb 2024

Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs
Rajat Vadiraj Dwaraknath, Tolga Ergen, Mert Pilanci
26 Sep 2023

ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models
Suzanna Parkinson, Greg Ongie, Rebecca Willett
24 May 2023

When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay
29 Apr 2023

Globally Optimal Training of Neural Networks with Threshold Activation Functions
Tolga Ergen, Halil Ibrahim Gulluk, Jonathan Lacotte, Mert Pilanci
06 Mar 2023

The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers
Zong-xiao Li, Chong You, Srinadh Bhojanapalli, Daliang Li, A. S. Rawat, ..., Kenneth Q Ye, Felix Chern, Felix X. Yu, Ruiqi Guo, Surinder Kumar
12 Oct 2022

Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive?
Kaiqi Zhang, Yu-Xiang Wang
20 Apr 2022

Parallel Deep Neural Networks Have Zero Duality Gap
Yifei Wang, Tolga Ergen, Mert Pilanci
13 Oct 2021

Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions
Arda Sahiner, Tolga Ergen, Batu Mehmet Ozturkler, Burak Bartan, John M. Pauly, Morteza Mardani, Mert Pilanci
12 Jul 2021

Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization
Tolga Ergen, Arda Sahiner, Batu Mehmet Ozturkler, John M. Pauly, Morteza Mardani, Mert Pilanci
02 Mar 2021

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
16 Nov 2016

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015