ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

SGD Learns One-Layer Networks in WGANs
arXiv: 1910.07030
15 October 2019
Qi Lei, J. Lee, A. Dimakis, C. Daskalakis
[GAN]

Papers citing "SGD Learns One-Layer Networks in WGANs"

7 papers:

  • Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization
    Tianyi Lin, Chi Jin, Michael I. Jordan (28 Jan 2025)
  • A Mathematical Framework for Learning Probability Distributions
    Hongkang Yang (22 Dec 2022)
  • Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity
    Junchi Yang, Antonio Orvieto, Aurelien Lucchi, Niao He (10 Dec 2021)
  • Generalization Error of GAN from the Discriminator's Perspective [GAN]
    Hongkang Yang, Weinan E (08 Jul 2021)
  • Understanding Overparameterization in Generative Adversarial Networks [AI4CE]
    Yogesh Balaji, M. Sajedi, Neha Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, S. Feizi (12 Apr 2021)
  • The Complexity of Nonconvex-Strongly-Concave Minimax Optimization
    Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He (29 Mar 2021)
  • GANs May Have No Nash Equilibria [GAN]
    Farzan Farnia, Asuman Ozdaglar (21 Feb 2020)