arXiv: 1804.07972
Eval all, trust a few, do wrong to none: Comparing sentence generation models
21 April 2018
Ondřej Cífka, Aliaksei Severyn, Enrique Alfonseca, Katja Filippova
Papers citing "Eval all, trust a few, do wrong to none: Comparing sentence generation models" (9 papers)
Measuring Diversity in Synthetic Datasets
Yuchang Zhu, Huizhe Zhang, Bingzhe Wu, Jintang Li, Zibin Zheng, Peilin Zhao, Liang Chen, Yatao Bian (12 Feb 2025)
MAD Speech: Measures of Acoustic Diversity of Speech
Matthieu Futeral, A. Agostinelli, Marco Tagliasacchi, Neil Zeghidour, Eugene Kharitonov (16 Apr 2024)
The Vendi Score: A Diversity Evaluation Metric for Machine Learning
Dan Friedman, Adji Bousso Dieng (05 Oct 2022)
How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models
Ahmed Alaa, B. V. Breugel, Evgeny S. Saveliev, M. Schaar (17 Feb 2021)
Plug and Play Autoencoders for Conditional Text Generation
Florian Mai, Nikolaos Pappas, Ivan Montero, Noah A. Smith (06 Oct 2020)
Educating Text Autoencoders: Latent Representation Guidance via Denoising
T. Shen, Jonas W. Mueller, Regina Barzilay, Tommi Jaakkola (29 May 2019)
Judge the Judges: A Large-Scale Evaluation Study of Neural Language Models for Online Review Generation
Cristina Garbacea, Samuel Carton, Shiyan Yan, Qiaozhu Mei (02 Jan 2019)
On Accurate Evaluation of GANs for Language Generation
Stanislau Semeniuta, Aliaksei Severyn, Sylvain Gelly (13 Jun 2018)
Assessing Generative Models via Precision and Recall
Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly (31 May 2018)