ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Emergent Abilities of Large Language Models (arXiv:2206.07682)
15 June 2022
Jason W. Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, J. Dean, W. Fedus
ELM · ReLM · LRM

Papers citing "Emergent Abilities of Large Language Models" (26 of 1,576 papers shown)
Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?
Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won Chung, W. Fedus, J. Rao, Sharan Narang, Vinh Q. Tran, Dani Yogatama, Donald Metzler
AI4CE · 34 / 100 / 0 · 21 Jul 2022

Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
ReLM · LRM · 30 / 181 / 0 · 14 Jul 2022

Combing for Credentials: Active Pattern Extraction from Smart Reply
Bargav Jayaraman, Esha Ghosh, Melissa Chase, Sambuddha Roy, Wei Dai, David E. Evans
SILM · 20 / 8 / 0 · 14 Jul 2022

TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh
27 / 74 / 0 · 08 Jul 2022

Language statistics at different spatial, temporal, and grammatical scales
Fernanda Sánchez-Puig, Rogelio Lozano-Aranda, Dante Pérez-Méndez, E. Colman, Alfredo Morales-Guzmán, Carlos Pineda, Pedro J. Rivera Torres, C. Gershenson
25 / 0 / 0 · 02 Jul 2022
ProGen2: Exploring the Boundaries of Protein Language Models
Erik Nijkamp, Jeffrey A. Ruffolo, Eli N. Weinstein, Nikhil Naik, Ali Madani
AI4TS · 30 / 282 / 0 · 27 Jun 2022

Write and Paint: Generative Vision-Language Models are Unified Modal Learners
Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, Jiawei Wang
MLLM · AI4CE · 22 / 16 / 0 · 15 Jun 2022

Can Foundation Models Help Us Achieve Perfect Secrecy?
Simran Arora, Christopher Ré
FedML · 24 / 6 / 0 · 27 May 2022

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
ReLM · LRM · 328 / 4,077 / 0 · 24 May 2022

Evaluating and Inducing Personality in Pre-trained Language Models
Guangyuan Jiang, Manjie Xu, Song-Chun Zhu, Wenjuan Han, Chi Zhang, Yixin Zhu
34 / 76 / 0 · 20 May 2022
UL2: Unifying Language Learning Paradigms
Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason W. Wei, ..., Tal Schuster, H. Zheng, Denny Zhou, N. Houlsby, Donald Metzler
AI4CE · 59 / 297 / 0 · 10 May 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM · BDL · LRM · AI4CE · 314 / 3,273 / 0 · 21 Mar 2022

Iteratively Prompt Pre-trained Language Models for Chain of Thought
Boshi Wang, Xiang Deng, Huan Sun
KELM · ReLM · LRM · 24 / 95 / 0 · 16 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 345 / 12,003 / 0 · 04 Mar 2022
Probing Pretrained Models of Source Code
Sergey Troshin, Nadezhda Chirkova
ELM · 27 / 38 / 0 · 16 Feb 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro · LRM · AI4CE · ReLM · 398 / 8,559 / 0 · 28 Jan 2022

Black-box Prompt Learning for Pre-trained Language Models
Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang
VLM · AAML · 36 / 68 / 0 · 21 Jan 2022

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
LRM · 215 / 1,661 / 0 · 15 Oct 2021

BBQ: A Hand-Built Bias Benchmark for Question Answering
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman
223 / 374 / 0 · 15 Oct 2021
Pretrained Language Models are Symbolic Mathematics Solvers too!
Kimia Noorbakhsh, Modar Sulaiman, M. Sharifi, Kallol Roy, Pooyan Jamshidi
LRM · 28 / 18 / 0 · 07 Oct 2021

Unsolved Problems in ML Safety
Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt
186 / 275 / 0 · 28 Sep 2021

Frequency Effects on Syntactic Rule Learning in Transformers
Jason W. Wei, Dan Garrette, Tal Linzen, Ellie Pavlick
88 / 62 / 0 · 14 Sep 2021

Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
SyDa · 242 / 593 / 0 · 14 Jul 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
243 / 1,924 / 0 · 31 Dec 2020
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU · SILM · 290 / 1,824 / 0 · 14 Dec 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
264 / 4,489 / 0 · 23 Jan 2020