RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks
Rohun Tripathi, Bharat Singh
12 May 2020 · arXiv:2005.05955

Papers citing "RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks"

16 papers:

Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
  Jonathan Frankle, D. Schwab, Ari S. Morcos · 29 Feb 2020

What's Hidden in a Randomly Weighted Neural Network?
  Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari · 29 Nov 2019

Weight Agnostic Neural Networks
  Adam Gaier, David R Ha · 11 Jun 2019

Putting An End to End-to-End: Gradient-Isolated Learning of Representations
  Sindy Löwe, Peter O'Connor, Bastiaan S. Veeling · 28 May 2019

Gradient Descent Provably Optimizes Over-parameterized Neural Networks
  S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh · 04 Oct 2018

Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data
  Yuanzhi Li, Yingyu Liang · 03 Aug 2018

DARTS: Differentiable Architecture Search
  Hanxiao Liu, Karen Simonyan, Yiming Yang · 24 Jun 2018

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
  Jonathan Frankle, Michael Carbin · 09 Mar 2018

Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning
  F. Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, Jeff Clune · 18 Dec 2017

Evolution Strategies as a Scalable Alternative to Reinforcement Learning
  Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever · 10 Mar 2017

Neural Architecture Search with Reinforcement Learning
  Barret Zoph, Quoc V. Le · 05 Nov 2016

Pruning Filters for Efficient ConvNets
  Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf · 31 Aug 2016

Training Neural Networks Without Gradients: A Scalable ADMM Approach
  Gavin Taylor, R. Burmeister, Zheng Xu, Bharat Singh, Ankit B. Patel, Tom Goldstein · 06 May 2016

Deep Residual Learning for Image Recognition
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 10 Dec 2015

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 06 Feb 2015

Adam: A Method for Stochastic Optimization
  Diederik P. Kingma, Jimmy Ba · 22 Dec 2014