UCD-CS at W-NUT 2020 Shared Task-3: A Text to Text Approach for COVID-19 Event Extraction on Social Media

Congcong Wang, David Lillis
21 September 2020
arXiv:2009.10047

Papers citing "UCD-CS at W-NUT 2020 Shared Task-3: A Text to Text Approach for COVID-19 Event Extraction on Social Media" (11 of 11 papers shown)
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020
CrisisBERT: a Robust Transformer for Crisis Classification and Contextual Crisis Embedding
Junhua Liu, Trisha Singhal, L. Blessing, Kristin L. Wood, Kwan Hui Lim
11 May 2020
Rapidly Bootstrapping a Question Answering Dataset for COVID-19
Raphael Tang, Rodrigo Nogueira, Edwin Zhang, Nikhil Gupta, P. Cẩm, Kyunghyun Cho, Jimmy J. Lin
23 Apr 2020
CORD-19: The COVID-19 Open Research Dataset
Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, ..., Boya Xie, Douglas A. Raymond, Daniel S. Weld, Oren Etzioni, Sebastian Kohlmeier
22 Apr 2020
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez
26 Feb 2020
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer
29 Oct 2019
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
23 Oct 2019
The Curious Case of Neural Text Degeneration
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi
22 Apr 2019
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
11 Oct 2018
Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
15 Feb 2018
Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017