Prompting a Pretrained Transformer Can Be a Universal Approximator
Aleksandar Petrov, Philip Torr, Adel Bibi
22 February 2024
Papers citing "Prompting a Pretrained Transformer Can Be a Universal Approximator" (4 papers):
1. "Expressivity of Neural Networks with Random Weights and Learned Biases." Ezekiel Williams, Avery Hee-Woon Ryoo, Thomas Jiralerspong, Alexandre Payeur, M. Perich, Luca Mazzucato, Guillaume Lajoie. 01 Jul 2024.
2. "On the Expressivity Role of LayerNorm in Transformers' Attention." Shaked Brody, Shiyu Jin, Xinghao Zhu. 04 May 2023. [MoE]
3. "Large Language Models are Zero-Shot Reasoners." Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa. 24 May 2022. [ReLM, LRM]
4. "The Power of Scale for Parameter-Efficient Prompt Tuning." Brian Lester, Rami Al-Rfou, Noah Constant. 18 Apr 2021. [VPVLM]