ResearchTrend.AI

Cited By: Language Models are Few-Shot Learners (arXiv:2005.14165)

28 May 2020
Tom B. Brown
Benjamin Mann
Nick Ryder
Melanie Subbiah
Jared Kaplan
Prafulla Dhariwal
Arvind Neelakantan
Pranav Shyam
Girish Sastry
Amanda Askell
Sandhini Agarwal
Ariel Herbert-Voss
Gretchen Krueger
T. Henighan
R. Child
Aditya A. Ramesh
Daniel M. Ziegler
Jeff Wu
Clemens Winter
Christopher Hesse
Mark Chen
Eric Sigler
Mateusz Litwin
Scott Gray
B. Chess
Jack Clark
Christopher Berner
Sam McCandlish
Alec Radford
Ilya Sutskever
Dario Amodei
    BDL

Papers citing "Language Models are Few-Shot Learners"

50 / 11,025 papers shown
Relational World Knowledge Representation in Contextual Language Models: A Review
Tara Safavi
Danai Koutra
KELM
35
51
0
12 Apr 2021
Survey on reinforcement learning for language processing
Víctor Uc Cetina
Nicolás Navarro-Guerrero
A. Martín-González
C. Weber
S. Wermter
OffRL
24
101
0
12 Apr 2021
FUDGE: Controlled Text Generation With Future Discriminators
Kevin Kaichuang Yang
Dan Klein
27
313
0
12 Apr 2021
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections
Ruiqi Zhong
Kristy Lee
Zheng-Wei Zhang
Dan Klein
33
166
0
10 Apr 2021
Text2Chart: A Multi-Staged Chart Generator from Natural Language Text
Md. Mahinur Rashid
Hasin Kawsar Jahan
Annysha Huzzat
Riyasaat Ahmed Rahul
Tamim Bin Zakir
F. Meem
Md. Saddam Hossain Mukta
Swakkhar Shatabda
35
8
0
09 Apr 2021
Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM
Deepak Narayanan
M. Shoeybi
Jared Casper
P. LeGresley
M. Patwary
...
Prethvi Kashinkunti
J. Bernauer
Bryan Catanzaro
Amar Phanishayee
Matei A. Zaharia
MoE
37
646
0
09 Apr 2021
SiT: Self-supervised vIsion Transformer
Sara Atito Ali Ahmed
Muhammad Awais
J. Kittler
ViT
39
139
0
08 Apr 2021
Low-Regret Active Learning
Cenk Baykal
Lucas Liebenwein
Dan Feldman
Daniela Rus
UQCV
36
3
0
06 Apr 2021
Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark
Vincent Dumoulin
N. Houlsby
Utku Evci
Xiaohua Zhai
Ross Goroshin
Sylvain Gelly
Hugo Larochelle
27
26
0
06 Apr 2021
Enabling Inference Privacy with Adaptive Noise Injection
Sanjay Kariyappa
Ousmane Amadou Dia
Moinuddin K. Qureshi
16
5
0
06 Apr 2021
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Mingzhe Chen
Deniz Gündüz
Kaibin Huang
Walid Saad
M. Bennis
Aneta Vulgarakis Feljan
H. Vincent Poor
38
401
0
05 Apr 2021
What Will it Take to Fix Benchmarking in Natural Language Understanding?
Samuel R. Bowman
George E. Dahl
ELM
ALM
30
156
0
05 Apr 2021
Compressing Visual-linguistic Model via Knowledge Distillation
Zhiyuan Fang
Jianfeng Wang
Xiaowei Hu
Lijuan Wang
Yezhou Yang
Zicheng Liu
VLM
39
97
0
05 Apr 2021
Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation
Emilio Parisotto
Ruslan Salakhutdinov
42
44
0
04 Apr 2021
Recommending Metamodel Concepts during Modeling Activities with Pre-Trained Language Models
Martin Weyssow
H. Sahraoui
Eugene Syriani
16
50
0
04 Apr 2021
Deepfake Detection Scheme Based on Vision Transformer and Distillation
Young-Jin Heo
Y. Choi
Young-Woon Lee
Byung-Gyu Kim
ViT
17
55
0
03 Apr 2021
Towards General Purpose Vision Systems
Tanmay Gupta
Amita Kamath
Aniruddha Kembhavi
Derek Hoiem
11
50
0
01 Apr 2021
Going deeper with Image Transformers
Hugo Touvron
Matthieu Cord
Alexandre Sablayrolles
Gabriel Synnaeve
Hervé Jégou
ViT
27
986
0
31 Mar 2021
Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos
Annie S. Chen
Suraj Nair
Chelsea Finn
38
137
0
31 Mar 2021
EnergyVis: Interactively Tracking and Exploring Energy Consumption for ML Models
Omar Shaikh
Jon Saad-Falcon
Austin P. Wright
Nilaksh Das
Scott Freitas
O. Asensio
Duen Horng Chau
27
18
0
30 Mar 2021
Retraining DistilBERT for a Voice Shopping Assistant by Using Universal Dependencies
P. Jayarao
Arpit Sharma
16
2
0
29 Mar 2021
ViViT: A Video Vision Transformer
Anurag Arnab
Mostafa Dehghani
G. Heigold
Chen Sun
Mario Lucic
Cordelia Schmid
ViT
30
2,088
0
29 Mar 2021
On the Adversarial Robustness of Vision Transformers
Rulin Shao
Zhouxing Shi
Jinfeng Yi
Pin-Yu Chen
Cho-Jui Hsieh
ViT
33
137
0
29 Mar 2021
Machine Learning Meets Natural Language Processing -- The story so far
N. Galanis
P. Vafiadis
K.-G. Mirzaev
G. Papakostas
38
6
0
27 Mar 2021
Data Augmentation in Natural Language Processing: A Novel Text Generation Approach for Long and Short Text Classifiers
Markus Bayer
M. Kaufhold
Björn Buchhold
Marcel Keller
J. Dallmeyer
Christian A. Reuter
25
113
0
26 Mar 2021
Vision Transformers for Dense Prediction
René Ranftl
Alexey Bochkovskiy
V. Koltun
ViT
MDE
45
1,662
0
24 Mar 2021
FastMoE: A Fast Mixture-of-Expert Training System
Jiaao He
J. Qiu
Aohan Zeng
Zhilin Yang
Jidong Zhai
Jie Tang
ALM
MoE
33
94
0
24 Mar 2021
Representing Numbers in NLP: a Survey and a Vision
Avijit Thawani
Jay Pujara
Pedro A. Szekely
Filip Ilievski
32
114
0
24 Mar 2021
Finetuning Pretrained Transformers into RNNs
Jungo Kasai
Hao Peng
Yizhe Zhang
Dani Yogatama
Gabriel Ilharco
Nikolaos Pappas
Yi Mao
Weizhu Chen
Noah A. Smith
38
63
0
24 Mar 2021
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2
Gregor Betz
Kyle Richardson
Christian Voigt
ReLM
LRM
24
30
0
24 Mar 2021
Multi-view 3D Reconstruction with Transformer
Dan Wang
Xinrui Cui
Xun Chen
Zhengxia Zou
Tianyang Shi
Septimiu Salcudean
Z. J. Wang
Rabab Ward
ViT
22
87
0
24 Mar 2021
The NLP Cookbook: Modern Recipes for Transformer based Deep Learning Architectures
Sushant Singh
A. Mahmood
AI4TS
60
92
0
23 Mar 2021
How to decay your learning rate
Aitor Lewkowycz
41
24
0
23 Mar 2021
Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection
Jan Philip Wahle
Terry Ruas
Norman Meuschke
Bela Gipp
27
34
0
23 Mar 2021
Detecting Hate Speech with GPT-3
Ke-Li Chiu
Annie Collins
Rohan Alexander
AILaw
22
108
0
23 Mar 2021
Tiny Transformers for Environmental Sound Classification at the Edge
David Elliott
Carlos E. Otero
Steven Wyatt
Evan Martino
21
15
0
22 Mar 2021
End-to-End Trainable Multi-Instance Pose Estimation with Transformers
Lucas Stoffl
Maxime Vidal
Alexander Mathis
ViT
23
49
0
22 Mar 2021
Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets
Julia Kreutzer
Isaac Caswell
Lisa Wang
Ahsan Wahab
D. Esch
...
Duygu Ataman
Orevaoghene Ahia
Oghenefego Ahia
Sweta Agrawal
Mofetoluwa Adeyemi
20
267
0
22 Mar 2021
Improving and Simplifying Pattern Exploiting Training
Derek Tam
Rakesh R Menon
Joey Tianyi Zhou
Shashank Srivastava
Colin Raffel
18
149
0
22 Mar 2021
DeepViT: Towards Deeper Vision Transformer
Daquan Zhou
Bingyi Kang
Xiaojie Jin
Linjie Yang
Xiaochen Lian
Zihang Jiang
Qibin Hou
Jiashi Feng
ViT
42
510
0
22 Mar 2021
Attribute Alignment: Controlling Text Generation from Pre-trained Language Models
Dian Yu
Zhou Yu
Kenji Sagae
18
37
0
20 Mar 2021
Paint by Word
A. Andonian
David Bau
Audrey Cui
YeonHwan Park
Ali Jahanian
Antonio Torralba
A. Oliva
DiffM
20
125
0
19 Mar 2021
GPT Understands, Too
Xiao Liu
Yanan Zheng
Zhengxiao Du
Ming Ding
Yujie Qian
Zhilin Yang
Jie Tang
VLM
78
1,144
0
18 Mar 2021
Towards Few-Shot Fact-Checking via Perplexity
Nayeon Lee
Yejin Bang
Andrea Madotto
Madian Khabsa
Pascale Fung
AAML
13
90
0
17 Mar 2021
How Many Data Points is a Prompt Worth?
Teven Le Scao
Alexander M. Rush
VLM
66
296
0
15 Mar 2021
A Whole Brain Probabilistic Generative Model: Toward Realizing Cognitive Architectures for Developmental Robots
T. Taniguchi
Hiroshi Yamakawa
Takayuki Nagai
Kenji Doya
M. Sakagami
Masahiro Suzuki
Tomoaki Nakamura
Akira Taniguchi
28
23
0
15 Mar 2021
Revisiting ResNets: Improved Training and Scaling Strategies
Irwan Bello
W. Fedus
Xianzhi Du
E. D. Cubuk
A. Srinivas
Nayeon Lee
Jonathon Shlens
Barret Zoph
29
298
0
13 Mar 2021
Inductive Relation Prediction by BERT
H. Zha
Zhiyu Zoey Chen
Xifeng Yan
29
54
0
12 Mar 2021
CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation
J. Clark
Dan Garrette
Iulia Turc
John Wieting
36
210
0
11 Mar 2021
Integration of Convolutional Neural Networks in Mobile Applications
Roger Creus Castanyer
Silverio Martínez-Fernández
Xavier Franch
23
12
0
11 Mar 2021