Shared Task on Evaluating Accuracy in Natural Language Generation

Abstract
We propose a shared task on methodologies and algorithms for evaluating the accuracy of generated texts. Participants will measure the accuracy of basketball game summaries produced by NLG systems from basketball box score data.