GAN You Hear Me? Reclaiming Unconditional Speech Synthesis from Diffusion Models
Matthew Baas, Herman Kamper
DiffM | 11 October 2022
Papers citing "GAN You Hear Me? Reclaiming Unconditional Speech Synthesis from Diffusion Models" (8 / 8 papers shown)
RAVE for Speech: Efficient Voice Conversion at High Sampling Rates
A. R. Bargum, Simon Lajboschitz, Cumhur Erkut
29 Aug 2024 | 37 / 1 / 0
EDMSound: Spectrogram Based Diffusion Models for Efficient and High-Quality Audio Synthesis
Ge Zhu, Yutong Wen, M. Carbonneau, Zhiyao Duan
DiffM | 15 Nov 2023 | 51 / 7 / 0
Disentanglement in a GAN for Unconditional Speech Synthesis
Matthew Baas, Herman Kamper
DiffM | 04 Jul 2023 | 32 / 2 / 0
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
Yinghao Aaron Li, Cong Han, Vinay S. Raghavan, Gavin Mischler, N. Mesgarani
VLM, DiffM | 13 Jun 2023 | 37 / 107 / 0
Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
VLM | 24 Feb 2021 | 255 / 4,796 / 0
Generative Spoken Language Modeling from Raw Audio
Kushal Lakhotia, Evgeny Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, ..., Tu Nguyen, Jade Copet, Alexei Baevski, A. Mohamed, Emmanuel Dupoux
AuLLM | 01 Feb 2021 | 191 / 337 / 0
A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
12 Dec 2018 | 306 / 10,368 / 0
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
16 Nov 2016 | 297 / 10,225 / 0