Existence and Estimation of Critical Batch Size for Training Generative Adversarial Networks with Two Time-Scale Update Rule
arXiv:2201.11989, 28 January 2022
Naoki Sato, Hideaki Iiduka

Papers citing "Existence and Estimation of Critical Batch Size for Training Generative Adversarial Networks with Two Time-Scale Update Rule"

14 of 14 citing papers shown:

- "Projected GANs Converge Faster". Axel Sauer, Kashyap Chitta, Jens Muller, Andreas Geiger. 01 Nov 2021.
- "Low-Rank Subspaces in GANs". Jiapeng Zhu, Ruili Feng, Yujun Shen, Deli Zhao, Zhengjun Zha, Jingren Zhou, Qifeng Chen. 08 Jun 2021.
- "AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients". Juntang Zhuang, Tommy M. Tang, Yifan Ding, S. Tatikonda, Nicha Dvornek, X. Papademetris, James S. Duncan. 15 Oct 2020.
- "COT-GAN: Generating Sequential Data via Causal Optimal Transport". Tianlin Xu, L. Wenliang, Michael Munn, Beatrice Acciaio. 15 Jun 2020.
- "Convergence rates for the stochastic gradient descent method for non-convex objective functions". Benjamin J. Fehrman, Benjamin Gess, Arnulf Jentzen. 02 Apr 2019.
- "Progressive Augmentation of GANs". Dan Zhang, Anna Khoreva. 29 Jan 2019.
- "Measuring the Effects of Data Parallelism on Neural Network Training". Christopher J. Shallue, Jaehoon Lee, J. Antognini, J. Mamou, J. Ketterling, Yao Wang. 08 Nov 2018.
- "On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization". Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong. 08 Aug 2018.
- "Large Batch Training of Convolutional Networks". Yang You, Igor Gitman, Boris Ginsburg. 13 Aug 2017.
- "Gradient descent GAN optimization is locally stable". Vaishnavh Nagarajan, J. Zico Kolter. 13 Jun 2017.
- "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour". Priya Goyal, Piotr Dollár, Ross B. Girshick, P. Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He. 08 Jun 2017.
- "Improved Training of Wasserstein GANs". Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, Aaron Courville. 31 Mar 2017.
- "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks". Alec Radford, Luke Metz, Soumith Chintala. 19 Nov 2015.
- "Deep Learning Face Attributes in the Wild". Ziwei Liu, Ping Luo, Xiaogang Wang, Xiaoou Tang. 28 Nov 2014.