Mass-Editing Memory in a Transformer

arXiv:2210.07229 · 13 October 2022
Kevin Meng, Arnab Sen Sharma, A. Andonian, Yonatan Belinkov, David Bau
KELM, VLM

Papers citing "Mass-Editing Memory in a Transformer"

Showing 28 of 78 citing papers.

GPT-NeoX-20B: An Open-Source Autoregressive Language Model
Sid Black, Stella Biderman, Eric Hallahan, Quentin G. Anthony, Leo Gao, ..., Shivanshu Purohit, Laria Reynolds, J. Tow, Benqi Wang, Samuel Weinbach
180 · 835 · 0 · 14 Apr 2022

A Review on Language Models as Knowledge Bases
Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona T. Diab, Marjan Ghazvininejad
KELM
84 · 186 · 0 · 12 Apr 2022

PaLM: Scaling Language Modeling with Pathways
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, ..., Kathy Meier-Hellstern, Douglas Eck, J. Dean, Slav Petrov, Noah Fiedel
PILM, LRM
529 · 6,293 · 0 · 05 Apr 2022

Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
Mor Geva, Avi Caciularu, Ke Wang, Yoav Goldberg
KELM
127 · 386 · 0 · 28 Mar 2022

Locating and Editing Factual Associations in GPT
Kevin Meng, David Bau, A. Andonian, Yonatan Belinkov
KELM
251 · 1,381 · 0 · 10 Feb 2022

Kformer: Knowledge Injection in Transformer Feed-Forward Layers
Yunzhi Yao, Shaohan Huang, Li Dong, Furu Wei, Huajun Chen, Ningyu Zhang
KELM, MedIm
78 · 42 · 0 · 15 Jan 2022

Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs
Peter Hase, Mona T. Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Joey Tianyi Zhou, Srini Iyer
KELM, LRM
79 · 79 · 0 · 26 Nov 2021

Temporal Effects on Pre-trained Models for Language Processing Tasks
Oshin Agarwal, A. Nenkova
VLM
88 · 57 · 0 · 24 Nov 2021

Fast Model Editing at Scale
E. Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, Christopher D. Manning
KELM
335 · 378 · 0 · 21 Oct 2021

Carbon Emissions and Large Neural Network Training
David A. Patterson, Joseph E. Gonzalez, Quoc V. Le, Chen Liang, Lluís-Miquel Munguía, D. Rothchild, David R. So, Maud Texier, J. Dean
AI4CE
337 · 680 · 0 · 21 Apr 2021

Knowledge Neurons in Pretrained Transformers
Damai Dai, Li Dong, Y. Hao, Zhifang Sui, Baobao Chang, Furu Wei
KELM, MU
97 · 463 · 0 · 18 Apr 2021

Editing Factual Knowledge in Language Models
Nicola De Cao, Wilker Aziz, Ivan Titov
KELM
125 · 512 · 0 · 16 Apr 2021

Relational World Knowledge Representation in Contextual Language Models: A Review
Tara Safavi, Danai Koutra
KELM
82 · 51 · 0 · 12 Apr 2021

Mind the Gap: Assessing Temporal Generalization in Neural Language Models
Angeliki Lazaridou, A. Kuncoro, E. Gribovskaya, Devang Agrawal, Adam Liska, ..., Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, Phil Blunsom
VLM
130 · 218 · 0 · 03 Feb 2021

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
HILM
329 · 368 · 0 · 01 Feb 2021

Transformer Feed-Forward Layers Are Key-Value Memories
Mor Geva, R. Schuster, Jonathan Berant, Omer Levy
KELM
170 · 843 · 0 · 29 Dec 2020

Modifying Memories in Transformer Models
Chen Zhu, A. S. Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix X. Yu, Sanjiv Kumar
KELM
115 · 203 · 0 · 01 Dec 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL
877 · 42,379 · 0 · 28 May 2020

How Context Affects Language Models' Factual Predictions
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktaschel, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM
59 · 239 · 0 · 10 May 2020

Editable Neural Networks
A. Sinitsin, Vsevolod Plokhotnyuk, Dmitriy V. Pyrkin, Sergei Popov, Artem Babenko
KELM
113 · 182 · 0 · 01 Apr 2020

How Much Knowledge Can You Pack Into the Parameters of a Language Model?
Adam Roberts, Colin Raffel, Noam M. Shazeer
KELM
130 · 893 · 0 · 10 Feb 2020

PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
ODL
544 · 42,591 · 0 · 03 Dec 2019

How Can We Know What Language Models Know?
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig
KELM
144 · 1,409 · 0 · 28 Nov 2019

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM, AI4MH
576 · 2,674 · 0 · 03 Sep 2019

COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi
82 · 912 · 0 · 12 Jun 2019

Zero-Shot Relation Extraction via Reading Comprehension
Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer
ReLM
79 · 699 · 0 · 13 Jun 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
3DV
786 · 132,363 · 0 · 12 Jun 2017

Direct and Indirect Effects
Judea Pearl
CML
97 · 2,176 · 0 · 10 Jan 2013