Transformer Feed-Forward Layers Are Key-Value Memories
29 December 2020
Mor Geva, R. Schuster, Jonathan Berant, Omer Levy
KELM
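
For context on the cited work: the paper reads each transformer feed-forward block as an unnormalized key-value memory, where the rows of the first linear layer act as keys that match patterns in the input, and the rows of the second layer are values that are summed in proportion to how strongly their keys fire. Below is a minimal NumPy sketch of that reading; the sizes and names (d_model, d_ff, K, V, ffn) are illustrative assumptions, not the paper's code.

    import numpy as np

    d_model, d_ff = 8, 32                 # illustrative sizes, not from the paper
    rng = np.random.default_rng(0)
    K = rng.normal(size=(d_ff, d_model))  # "keys": rows of the first FFN matrix
    V = rng.normal(size=(d_ff, d_model))  # "values": rows of the second FFN matrix

    def ffn(x):
        m = np.maximum(x @ K.T, 0.0)      # memory coefficients f(x @ K.T): ReLU, no softmax
        return m @ V                      # output = coefficient-weighted sum of values

    x = rng.normal(size=(d_model,))
    print(ffn(x).shape)                   # -> (8,)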

Papers citing "Transformer Feed-Forward Layers Are Key-Value Memories"

50 / 151 papers shown
Identifying Linear Relational Concepts in Large Language Models
David Chanin, Anthony Hunter, Oana-Maria Camburu · LLMSV, KELM · 15 Nov 2023

Codebook Features: Sparse and Discrete Interpretability for Neural Networks
Alex Tamkin, Mohammad Taufeeque, Noah D. Goodman · 26 Oct 2023

Investigating semantic subspaces of Transformer sentence embeddings through linear structural probing
Dmitry Nikolaev, Sebastian Padó · 18 Oct 2023

ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models
Haoran Luo, E. Haihong, Zichen Tang, Shiyao Peng, Yikai Guo, ..., Guanting Dong, Meina Song, Wei Lin, Yifan Zhu, Luu Anh Tuan · RALM · 13 Oct 2023

Why bother with geometry? On the relevance of linear decompositions of Transformer embeddings
Timothee Mickus, Raúl Vázquez · 10 Oct 2023

Towards Best Practices of Activation Patching in Language Models: Metrics and Methods
Fred Zhang, Neel Nanda · LLMSV · 27 Sep 2023

Knowledge Sanitization of Large Language Models
Yoichi Ishibashi, Hidetoshi Shimodaira · KELM · 21 Sep 2023

PMET: Precise Model Editing in a Transformer
Xiaopeng Li, Shasha Li, Shezheng Song, Jing Yang, Jun Ma, Jie Yu · KELM · 17 Aug 2023

Evaluating the Ripple Effects of Knowledge Editing in Language Models
Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, Mor Geva · KELM · 24 Jul 2023

SelfEvolve: A Code Evolution Framework via Large Language Models
Shuyang Jiang, Yuhao Wang, Yu Wang · 05 Jun 2023

Memorization Capacity of Multi-Head Attention in Transformers
Sadegh Mahdavi, Renjie Liao, Christos Thrampoulidis · 03 Jun 2023

Editing Common Sense in Transformers
Anshita Gupta, Debanjan Mondal, Akshay Krishna Sheshadri, Wenlong Zhao, Xiang Lorraine Li, Sarah Wiegreffe, Niket Tandon · KELM · 24 May 2023

Explaining How Transformers Use Context to Build Predictions
Javier Ferrando, Gerard I. Gállego, Ioannis Tsiamas, Marta R. Costa-jussà · 21 May 2023

Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions
Byung-Doh Oh, William Schuler · 17 May 2023

N2G: A Scalable Approach for Quantifying Interpretable Neuron Representations in Large Language Models
Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Fazl Barez · MILM · 22 Apr 2023

Computational modeling of semantic change
Nina Tahmasebi, Haim Dubossarsky · 13 Apr 2023

Factorizers for Distributed Sparse Block Codes
Michael Hersche, Aleksandar Terzić, G. Karunaratne, Jovin Langenegger, Angeline Pouget, G. Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi · 24 Mar 2023

Eliciting Latent Predictions from Transformers with the Tuned Lens
Nora Belrose, Zach Furman, Logan Smith, Danny Halawi, Igor V. Ostrovsky, Lev McKinney, Stella Biderman, Jacob Steinhardt · 14 Mar 2023

LabelPrompt: Effective Prompt-based Learning for Relation Classification
Wenbo Zhang, Xiaoning Song, Zhenhua Feng, Tianyang Xu, Xiaojun Wu · VLM · 16 Feb 2023

A Study on ReLU and Softmax in Transformer
Kai Shen, Junliang Guo, Xuejiao Tan, Siliang Tang, Rui Wang, Jiang Bian · 13 Feb 2023

Interpretability in Activation Space Analysis of Transformers: A Focused Survey
Soniya Vijayakumar · AI4CE · 22 Jan 2023

Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models
Peter Hase, Joey Tianyi Zhou, Been Kim, Asma Ghandeharioun · MILM · 10 Jan 2023

Black-box language model explanation by context length probing
Ondřej Cífka, Antoine Liutkus · MILM, LRM · 30 Dec 2022

Rank-One Editing of Encoder-Decoder Models
Vikas Raunak, Arul Menezes · KELM · 23 Nov 2022

Interpreting Neural Networks through the Polytope Lens
Sid Black, Lee D. Sharkey, Léo Grinsztajn, Eric Winsor, Daniel A. Braun, ..., Kip Parker, Carlos Ramón Guevara, Beren Millidge, Gabriel Alfour, Connor Leahy · FAtt, MILM · 22 Nov 2022

Finding Skill Neurons in Pre-trained Transformer-based Language Models
Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, Juanzi Li · MILM, MoE · 14 Nov 2022

Large Language Models with Controllable Working Memory
Daliang Li, A. S. Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix X. Yu, Surinder Kumar · KELM · 09 Nov 2022

Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt · 01 Nov 2022

Parameter-Efficient Tuning Makes a Good Classification Head
Zhuoyi Yang, Ming Ding, Yanhui Guo, Qingsong Lv, Jie Tang · VLM · 30 Oct 2022

Mass-Editing Memory in a Transformer
Kevin Meng, Arnab Sen Sharma, A. Andonian, Yonatan Belinkov, David Bau · KELM, VLM · 13 Oct 2022

Decoupled Context Processing for Context Augmented Language Modeling
Zonglin Li, Ruiqi Guo, Surinder Kumar · RALM, KELM · 11 Oct 2022

Understanding Transformer Memorization Recall Through Idioms
Adi Haviv, Ido Cohen, Jacob Gidron, R. Schuster, Yoav Goldberg, Mor Geva · 07 Oct 2022

Calibrating Factual Knowledge in Pretrained Language Models
Qingxiu Dong, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, Lei Li · KELM · 07 Oct 2022

Analyzing Transformers in Embedding Space
Guy Dar, Mor Geva, Ankit Gupta, Jonathan Berant · 06 Sep 2022

How to Dissect a Muppet: The Structure of Transformer Embedding Spaces
Timothee Mickus, Denis Paperno, Mathieu Constant · 07 Jun 2022

Language Anisotropic Cross-Lingual Model Editing
Yang Xu, Yutai Hou, Wanxiang Che, Min Zhang · KELM · 25 May 2022

Hybrid Transformer with Multi-level Fusion for Multimodal Knowledge Graph Completion
Xiang Chen, Ningyu Zhang, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen · 04 May 2022

Finding patterns in Knowledge Attribution for Transformers
Jeevesh Juneja, Ritu Agarwal · KELM · 03 May 2022

Plug-and-Play Adaptation for Continuously-updated QA
Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee · KELM · 27 Apr 2022

Monarch: Expressive Structured Matrices for Efficient and Accurate Training
Tri Dao, Beidi Chen, N. Sohoni, Arjun D Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré · 01 Apr 2022

Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
Mor Geva, Avi Caciularu, Ke Wang, Yoav Goldberg · KELM · 28 Mar 2022

The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention
Kazuki Irie, Róbert Csordás, Jürgen Schmidhuber · 11 Feb 2022

Locating and Editing Factual Associations in GPT
Kevin Meng, David Bau, A. Andonian, Yonatan Belinkov · KELM · 10 Feb 2022

Generative Modeling of Complex Data
Luca Canale, Nicolas Grislain, Grégoire Lothe, Johanne Leduc · SyDa · 04 Feb 2022

Kformer: Knowledge Injection in Transformer Feed-Forward Layers
Yunzhi Yao, Shaohan Huang, Li Dong, Furu Wei, Huajun Chen, Ningyu Zhang · KELM, MedIm · 15 Jan 2022

On Transferability of Prompt Tuning for Natural Language Processing
Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, ..., Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou · AAML, VLM · 12 Nov 2021

Fast Model Editing at Scale
E. Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, Christopher D. Manning · KELM · 21 Oct 2021

Towards a Unified View of Parameter-Efficient Transfer Learning
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig · AAML · 08 Oct 2021

MoEfication: Transformer Feed-forward Layers are Mixtures of Experts
Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou · MoE · 05 Oct 2021

Knowledge Neurons in Pretrained Transformers
Damai Dai, Li Dong, Y. Hao, Zhifang Sui, Baobao Chang, Furu Wei · KELM, MU · 18 Apr 2021