Prompting a Pretrained Transformer Can Be a Universal Approximator

22 February 2024
Aleksandar Petrov, Philip Torr, Adel Bibi
arXiv:2402.14753 · PDF · HTML

Papers citing "Prompting a Pretrained Transformer Can Be a Universal Approximator"

4 / 4 papers shown.

1. Expressivity of Neural Networks with Random Weights and Learned Biases
   Ezekiel Williams, Avery Hee-Woon Ryoo, Thomas Jiralerspong, Alexandre Payeur, M. Perich, Luca Mazzucato, Guillaume Lajoie
   01 Jul 2024 · 36 · 2 · 0

2. On the Expressivity Role of LayerNorm in Transformers' Attention
   Shaked Brody, Shiyu Jin, Xinghao Zhu
   Tags: MoE
   04 May 2023 · 66 · 30 · 0

3. Large Language Models are Zero-Shot Reasoners
   Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
   Tags: ReLM, LRM
   24 May 2022 · 328 · 4,077 · 0

4. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   Tags: VPVLM
   18 Apr 2021 · 280 · 3,848 · 0