Editing Knowledge Representation of Language Model via Rephrased Prefix Prompts

21 March 2024 · arXiv:2403.14381
Yuchen Cai, Ding Cao, Rongxi Guo, Yaqin Wen, Guiquan Liu, Enhong Chen
KELM

Papers citing "Editing Knowledge Representation of Language Model via Rephrased Prefix Prompts"

28 / 28 papers shown

Massive Editing for Large Language Models via Meta Learning
Chenmien Tan, Ge Zhang, Jie Fu
KELM · 08 Nov 2023 (49 / 37 / 0)

Knowledge Editing for Large Language Models: A Survey
Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, Wenlin Yao
KELM · 24 Oct 2023 (115 / 155 / 0)

Can We Edit Multimodal Large Language Models?
Siyuan Cheng, Bo Tian, Qingbin Liu, Xi Chen, Yongheng Wang, Huajun Chen, Ningyu Zhang
MLLM · 12 Oct 2023 (72 / 28 / 0)

PMET: Precise Model Editing in a Transformer
Xiaopeng Li, Shasha Li, Shangwen Wang, Jing Yang, Jun Ma, Jie Yu
KELM · 17 Aug 2023 (73 / 130 / 0)

Scaling Sentence Embeddings with Large Language Models
Ting Jiang, Shaohan Huang, Zhongzhi Luan, Deqing Wang, Fuzhen Zhuang
LRM · 31 Jul 2023 (66 / 44 / 0)

Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev
HILM · 25 May 2023 (57 / 113 / 0)

Editing Large Language Models: Problems, Methods, and Opportunities
Yunzhi Yao, Peng Wang, Bo Tian, Shuyang Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, Ningyu Zhang
KELM · 22 May 2023 (67 / 304 / 0)

Can We Edit Factual Knowledge by In-Context Learning?
Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang
KELM · 22 May 2023 (63 / 210 / 0)

Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge
Yasumasa Onoe, Michael J.Q. Zhang, Shankar Padmanabhan, Greg Durrett, Eunsol Choi
KELM · 02 May 2023 (235 / 78 / 0)

Dissecting Recall of Factual Associations in Auto-Regressive Language Models
Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson
KELM · 28 Apr 2023 (238 / 313 / 0)

A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, ..., Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung
ReLM · LRM · 08 Feb 2023 (82 / 1,372 / 0)

Editing Language Model-based Knowledge Graph Embeddings
Shuyang Cheng, Ningyu Zhang, Bo Tian, Feiyu Xiong, Wei Guo, Huajun Chen
KELM · 25 Jan 2023 (80 / 26 / 0)

Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers
Damai Dai, Yutao Sun, Li Dong, Y. Hao, Shuming Ma, Zhifang Sui, Furu Wei
LRM · 20 Dec 2022 (66 / 166 / 0)

Mass-Editing Memory in a Transformer
Kevin Meng, Arnab Sen Sharma, A. Andonian, Yonatan Belinkov, David Bau
KELM · VLM · 13 Oct 2022 (105 / 583 / 0)

Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
Mor Geva, Avi Caciularu, Ke Wang, Yoav Goldberg
KELM · 28 Mar 2022 (95 / 379 / 0)

Locating and Editing Factual Associations in GPT
Kevin Meng, David Bau, A. Andonian, Yonatan Belinkov
KELM · 10 Feb 2022 (224 / 1,344 / 0)

Black-Box Tuning for Language-Model-as-a-Service
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu
VLM · 10 Jan 2022 (127 / 267 / 0)

Fast Model Editing at Scale
E. Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, Christopher D. Manning
KELM · 21 Oct 2021 (322 / 366 / 0)

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
VLM · SyDa · 28 Jul 2021 (187 / 3,961 / 0)

SimCSE: Simple Contrastive Learning of Sentence Embeddings
Tianyu Gao, Xingcheng Yao, Danqi Chen
AILaw · SSL · 18 Apr 2021 (255 / 3,386 / 0)

Editing Factual Knowledge in Language Models
Nicola De Cao, Wilker Aziz, Ivan Titov
KELM · 16 Apr 2021 (106 / 504 / 0)

GPT Understands, Too
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
VLM · 18 Mar 2021 (155 / 1,173 / 0)

Transformer Feed-Forward Layers Are Key-Value Memories
Mor Geva, R. Schuster, Jonathan Berant, Omer Levy
KELM · 29 Dec 2020 (130 / 827 / 0)

Modifying Memories in Transformer Models
Chen Zhu, A. S. Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix X. Yu, Sanjiv Kumar
KELM · 01 Dec 2020 (96 / 203 / 0)

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL · 28 May 2020 (708 / 41,894 / 0)

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Adam Tauman Kalai
27 Jan 2019 (167 / 455 / 0)

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
VLM · SSL · SSeg · 11 Oct 2018 (1.7K / 94,729 / 0)

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
3DV · 12 Jun 2017 (658 / 131,414 / 0)