How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances

11 October 2023
Zihan Zhang
Meng Fang
Lingxi Chen
Mohammad-Reza Namazi-Rad
Jun Wang
KELM
ArXiv (abs) · PDF · HTML · GitHub (134★)

Papers citing "How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances"

50 / 151 papers shown
Semiparametric Language Models Are Scalable Continual Learners
Guangyue Peng
Tao Ge
Si-Qing Chen
Furu Wei
Houfeng Wang
KELM
77
11
0
02 Mar 2023
Edit at your own risk: evaluating the robustness of edited models to distribution shifts
Davis Brown
Charles Godfrey
Cody Nizinski
Jonathan Tu
Henry Kvinge
KELM
77
8
0
28 Feb 2023
KILM: Knowledge Injection into Encoder-Decoder Language Models
Yan Xu
Mahdi Namazifar
Devamanyu Hazarika
Aishwarya Padmakumar
Yang Liu
Dilek Z. Hakkani-Tür
KELM
63
27
0
17 Feb 2023
Continual Pre-training of Language Models
Zixuan Ke
Yijia Shao
Haowei Lin
Tatsuya Konishi
Gyuhak Kim
Bin Liu
CLL · KELM
123
139
0
07 Feb 2023
Large Language Models Can Be Easily Distracted by Irrelevant Context
Freda Shi
Xinyun Chen
Kanishka Misra
Nathan Scales
David Dohan
Ed H. Chi
Nathanael Scharli
Denny Zhou
ReLM · RALM · LRM
108
597
0
31 Jan 2023
REPLUG: Retrieval-Augmented Black-Box Language Models
Weijia Shi
Sewon Min
Michihiro Yasunaga
Minjoon Seo
Rich James
M. Lewis
Luke Zettlemoyer
Wen-tau Yih
RALM · VLM · KELM
162
642
0
30 Jan 2023
Transformer-Patcher: One Mistake worth One Neuron
Zeyu Huang
Songlin Yang
Xiaofeng Zhang
Jie Zhou
Wenge Rong
Zhang Xiong
KELM
97
179
0
24 Jan 2023
Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models
Peter Hase
Joey Tianyi Zhou
Been Kim
Asma Ghandeharioun
MILM
110
187
0
10 Jan 2023
Rethinking with Retrieval: Faithful Large Language Model Inference
Hangfeng He
Hongming Zhang
Dan Roth
KELM · LRM
227
168
0
31 Dec 2022
Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP
Omar Khattab
Keshav Santhanam
Xiang Lisa Li
David Leo Wright Hall
Percy Liang
Christopher Potts
Matei A. Zaharia
RALM · KELM
96
269
0
28 Dec 2022
Large Language Models Encode Clinical Knowledge
K. Singhal
Shekoofeh Azizi
T. Tu
S. S. Mahdavi
Jason W. Wei
...
A. Rajkomar
Joelle Barral
Christopher Semturs
Alan Karthikesalingam
Vivek Natarajan
LM&MA · ELM · AI4MH
161
2,381
0
26 Dec 2022
When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories
Alex Troy Mallen
Akari Asai
Victor Zhong
Rajarshi Das
Daniel Khashabi
Hannaneh Hajishirzi
RALM · HILM · KELM
120
610
0
20 Dec 2022
Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions
H. Trivedi
Niranjan Balasubramanian
Tushar Khot
Ashish Sabharwal
KELM · RALM · LRM
139
470
0
20 Dec 2022
Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer
Zhengbao Jiang
Luyu Gao
Jun Araki
Haibo Ding
Zhiruo Wang
Jamie Callan
Graham Neubig
RALM
126
43
0
05 Dec 2022
Continual Learning of Natural Language Processing Tasks: A Survey
Zixuan Ke
Bin Liu
KELM · CLL · VLM
94
79
0
23 Nov 2022
Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors
Thomas Hartvigsen
S. Sankaranarayanan
Hamid Palangi
Yoon Kim
Marzyeh Ghassemi
KELM
129
177
0
20 Nov 2022
A Survey of Knowledge Enhanced Pre-trained Language Models
Linmei Hu
Zeyi Liu
Ziwang Zhao
Lei Hou
Liqiang Nie
Juanzi Li
KELM · VLM
119
136
0
11 Nov 2022
DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering
Ella Neeman
Roee Aharoni
Or Honovich
Leshem Choshen
Idan Szpektor
Omri Abend
KELM · CML
100
83
0
10 Nov 2022
Large Language Models with Controllable Working Memory
Daliang Li
A. S. Rawat
Manzil Zaheer
Xin Wang
Michal Lukasik
Andreas Veit
Felix X. Yu
Surinder Kumar
KELM
123
170
0
09 Nov 2022
You can't pick your neighbors, or can you? When and how to rely on retrieval in the kNN-LM
Andrew Drozdov
Shufan Wang
Razieh Rahimi
Andrew McCallum
Hamed Zamani
Mohit Iyyer
RALM
194
17
0
28 Oct 2022
Rich Knowledge Sources Bring Complex Knowledge Conflicts: Recalibrating Models to Reflect Conflicting Evidence
Hung-Ting Chen
Michael J.Q. Zhang
Eunsol Choi
RALM · HILM
134
99
0
25 Oct 2022
Prompting GPT-3 To Be Reliable
Chenglei Si
Zhe Gan
Zhengyuan Yang
Shuohang Wang
Jianfeng Wang
Jordan L. Boyd-Graber
Lijuan Wang
KELM · LRM
98
302
0
17 Oct 2022
RARR: Researching and Revising What Language Models Say, Using Language Models
Luyu Gao
Zhuyun Dai
Panupong Pasupat
Anthony Chen
Arun Tejasvi Chaganty
...
Vincent Zhao
Ni Lao
Hongrae Lee
Da-Cheng Juan
Kelvin Guu
HILM · KELM
97
258
0
17 Oct 2022
Mass-Editing Memory in a Transformer
Kevin Meng
Arnab Sen Sharma
A. Andonian
Yonatan Belinkov
David Bau
KELM · VLM
152
599
0
13 Oct 2022
Continual Training of Language Models for Few-Shot Learning
Zixuan Ke
Haowei Lin
Yijia Shao
Hu Xu
Lei Shu
Bin Liu
KELM · BDL · CLL
124
35
0
11 Oct 2022
Measuring and Narrowing the Compositionality Gap in Language Models
Ofir Press
Muru Zhang
Sewon Min
Ludwig Schmidt
Noah A. Smith
M. Lewis
ReLM · KELM · LRM
202
643
0
07 Oct 2022
Calibrating Factual Knowledge in Pretrained Language Models
Qingxiu Dong
Damai Dai
Yifan Song
Jingjing Xu
Zhifang Sui
Lei Li
KELM
299
90
0
07 Oct 2022
ReAct: Synergizing Reasoning and Acting in Language Models
Shunyu Yao
Jeffrey Zhao
Dian Yu
Nan Du
Izhak Shafran
Karthik Narasimhan
Yuan Cao
LLMAG · ReLM · LRM
450
2,982
0
06 Oct 2022
Decomposed Prompting: A Modular Approach for Solving Complex Tasks
Tushar Khot
H. Trivedi
Matthew Finlayson
Yao Fu
Kyle Richardson
Peter Clark
Ashish Sabharwal
ReLM · LRM
145
452
0
05 Oct 2022
LM-CORE: Language Models with Contextually Relevant External Knowledge
Jivat Neet Kaur
S. Bhatia
Milan Aggarwal
Rachit Bansal
Balaji Krishnamurthy
KELM
67
13
0
12 Aug 2022
BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage
Kurt Shuster
Jing Xu
M. Komeili
Da Ju
Eric Michael Smith
...
Naman Goyal
Arthur Szlam
Y-Lan Boureau
Melanie Kambadur
Jason Weston
LM&Ro · KELM
114
242
0
05 Aug 2022
RealTime QA: What's the Answer Right Now?
Jungo Kasai
Keisuke Sakaguchi
Yoichi Takahashi
Ronan Le Bras
Akari Asai
Xinyan Velocity Yu
Dragomir R. Radev
Noah A. Smith
Yejin Choi
Kentaro Inui
KELM
148
194
0
27 Jul 2022
Memory-Based Model Editing at Scale
E. Mitchell
Charles Lin
Antoine Bosselut
Christopher D. Manning
Chelsea Finn
KELM
114
361
0
13 Jun 2022
kNN-Prompt: Nearest Neighbor Zero-Shot Inference
Weijia Shi
Julian Michael
Suchin Gururangan
Luke Zettlemoyer
RALM · VLM
78
32
0
27 May 2022
Language Anisotropic Cross-Lingual Model Editing
Yang Xu
Yutai Hou
Wanxiang Che
Min Zhang
KELM
147
28
0
25 May 2022
Fine-tuned Language Models are Continual Learners
Thomas Scialom
Tuhin Chakrabarty
Smaranda Muresan
CLL · LRM
191
123
0
24 May 2022
StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models
Adam Liška
Tomáš Kočiský
E. Gribovskaya
Tayfun Terzi
Eren Sezener
...
Susannah Young
Ellen Gilsenan-McMahon
Sophia Austin
Phil Blunsom
Angeliki Lazaridou
KELM
294
104
0
23 May 2022
Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models
Kushal Tirumala
Aram H. Markosyan
Luke Zettlemoyer
Armen Aghajanyan
TDI
112
197
0
22 May 2022
On Continual Model Refinement in Out-of-Distribution Data Streams
Bill Yuchen Lin
Sida I. Wang
Xi Lin
Robin Jia
Lin Xiao
Xiang Ren
Wen-tau Yih
CLL
68
31
0
04 May 2022
OPT: Open Pre-trained Transformer Language Models
Susan Zhang
Stephen Roller
Naman Goyal
Mikel Artetxe
Moya Chen
...
Daniel Simig
Punit Singh Koura
Anjali Sridhar
Tianlu Wang
Luke Zettlemoyer
VLM · OSLM · AI4CE
364
3,700
0
02 May 2022
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
Joel Jang
Seonghyeon Ye
Changho Lee
Sohee Yang
Joongbo Shin
Janghoon Han
Gyeonghun Kim
Minjoon Seo
CLL · KELM
112
98
0
29 Apr 2022
Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement
Bhavana Dalvi
Oyvind Tafjord
Peter Clark
LRM · KELM · ReLM
91
39
0
27 Apr 2022
Plug-and-Play Adaptation for Continuously-updated QA
Kyungjae Lee
Wookje Han
Seung-won Hwang
Hwaran Lee
Joonsuk Park
Sang-Woo Lee
KELM
73
16
0
27 Apr 2022
A Review on Language Models as Knowledge Bases
Badr AlKhamissi
Millicent Li
Asli Celikyilmaz
Mona T. Diab
Marjan Ghazvininejad
KELM
86
186
0
12 Apr 2022
PaLM: Scaling Language Modeling with Pathways
Aakanksha Chowdhery
Sharan Narang
Jacob Devlin
Maarten Bosma
Gaurav Mishra
...
Kathy Meier-Hellstern
Douglas Eck
J. Dean
Slav Petrov
Noah Fiedel
PILM · LRM
535
6,301
0
05 Apr 2022
Teaching language models to support answers with verified quotes
Jacob Menick
Maja Trebacz
Vladimir Mikulik
John Aslanides
Francis Song
...
Mia Glaese
Susannah Young
Lucy Campbell-Gillingham
G. Irving
Nat McAleese
ELM · RALM
308
266
0
21 Mar 2022
Memorizing Transformers
Yuhuai Wu
M. Rabe
DeLesley S. Hutchins
Christian Szegedy
RALM
100
178
0
16 Mar 2022
ELLE: Efficient Lifelong Pre-training for Emerging Data
Yujia Qin
Jiajie Zhang
Yankai Lin
Zhiyuan Liu
Peng Li
Maosong Sun
Jie Zhou
95
73
0
12 Mar 2022
Training language models to follow instructions with human feedback
Long Ouyang
Jeff Wu
Xu Jiang
Diogo Almeida
Carroll L. Wainwright
...
Amanda Askell
Peter Welinder
Paul Christiano
Jan Leike
Ryan J. Lowe
OSLM · ALM
888
13,207
0
04 Mar 2022
A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models
Da Yin
Li Dong
Hao Cheng
Xiaodong Liu
Kai-Wei Chang
Furu Wei
Jianfeng Gao
KELM
71
34
0
17 Feb 2022