Understanding Prompt Tuning and In-Context Learning via Meta-Learning
arXiv:2505.17010 · 22 May 2025
Tim Genewein, Kevin Wenliang Li, Jordi Grau-Moya, Anian Ruoss, Laurent Orseau, Marcus Hutter

Papers citing "Understanding Prompt Tuning and In-Context Learning via Meta-Learning" (18 papers)
  • On the generalization of language models from in-context learning and finetuning: a controlled study (01 May 2025). Andrew Kyle Lampinen, Arslan Chaudhry, Stephanie Chan, Cody Wild, Diane Wan, Alex Ku, Jorg Bornschein, Razvan Pascanu, Murray Shanahan, James L. McClelland.
  • Towards Interpretable Soft Prompts (02 Apr 2025). Oam Patel, Jason Wang, Nikhil Shivakumar Nayak, Suraj Srinivas, Himabindu Lakkaraju.
  • Strategy Coopetition Explains the Emergence and Transience of In-Context Learning (07 Mar 2025). Aaditya K. Singh, Ted Moskovitz, Sara Dragutinovic, Felix Hill, Stephanie C. Y. Chan, Andrew Saxe.
  • BALROG: Benchmarking Agentic LLM and VLM Reasoning On Games (20 Nov 2024). Davide Paglieri, Bartłomiej Cupiał, Samuel Coward, Ulyana Piterbarg, Maciej Wolczyk, ..., Lerrel Pinto, Rob Fergus, Jakob Foerster, Jack Parker-Holder, Tim Rocktaschel.
  • Toward Understanding In-context vs. In-weight Learning (30 Oct 2024). Bryan Chan, Xinyi Chen, András Gyorgy, Dale Schuurmans.
  • Compression via Pre-trained Transformers: A Study on Byte-Level Multimodal Data (07 Oct 2024). David Heurtel-Depeiges, Anian Ruoss, J. Veness, Tim Genewein.
  • Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey (21 Mar 2024). Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang.
  • The Transient Nature of Emergent In-Context Learning in Transformers (14 Nov 2023). Aaditya K. Singh, Stephanie C. Y. Chan, Ted Moskovitz, Erin Grant, Andrew M. Saxe, Felix Hill.
  • Black-box Prompt Tuning with Subspace Learning (04 May 2023). Yuanhang Zheng, Zhixing Tan, Peng Li, Yang Liu.
  • Memory-Based Meta-Learning on Non-Stationary Distributions (06 Feb 2023). Tim Genewein, Grégoire Delétang, Anian Ruoss, L. Wenliang, Elliot Catt, Vincent Dutordoir, Jordi Grau-Moya, Laurent Orseau, Marcus Hutter, J. Veness.
  • General-Purpose In-Context Learning by Meta-Learning Transformers (08 Dec 2022). Louis Kirsch, James Harrison, Jascha Narain Sohl-Dickstein, Luke Metz.
  • What Can Transformers Learn In-Context? A Case Study of Simple Function Classes (01 Aug 2022). Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant.
  • Transformers Can Do Bayesian Inference (20 Dec 2021). Samuel G. Müller, Noah Hollmann, Sebastian Pineda Arango, Josif Grabocka, Frank Hutter.
  • An Explanation of In-context Learning as Implicit Bayesian Inference (03 Nov 2021). Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma.
  • The Power of Scale for Parameter-Efficient Prompt Tuning (18 Apr 2021). Brian Lester, Rami Al-Rfou, Noah Constant.
  • Prefix-Tuning: Optimizing Continuous Prompts for Generation (01 Jan 2021). Xiang Lisa Li, Percy Liang.
  • Learning to reinforcement learn (17 Nov 2016). Jane X. Wang, Z. Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Rémi Munos, Charles Blundell, D. Kumaran, M. Botvinick.
  • A Philosophical Treatise of Universal Induction (28 May 2011). Samuel Rathmanner, Marcus Hutter.