Disentangling Latent Shifts of In-Context Learning Through Self-Training
Josip Jukić, Jan Šnajder
2 October 2024 · arXiv:2410.01508
Papers citing "Disentangling Latent Shifts of In-Context Learning Through Self-Training" (10 of 10 shown):
1. In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering. Sheng Liu, Haotian Ye, Lei Xing, James Y. Zou. 11 Nov 2023.
2. Function Vectors in Large Language Models. Eric Todd, Millicent Li, Arnab Sen Sharma, Aaron Mueller, Byron C. Wallace, David Bau. 23 Oct 2023.
3. Editing Models with Task Arithmetic. Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi. 08 Dec 2022.
4. Large Language Models Can Self-Improve. Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han. 20 Oct 2022.
5. Self-Training: A Survey. Massih-Reza Amini, Vasilii Feofanov, Loïc Pauletto, Lies Hadjadj, Emilie Devijver, Yury Maximov. 24 Feb 2022.
6. The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention. Kazuki Irie, Róbert Csordás, Jürgen Schmidhuber. 11 Feb 2022.
7. Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.
8. Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference. Timo Schick, Hinrich Schütze. 21 Jan 2020.
9. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.
10. Neural Network with Unbounded Activation Functions is Universal Approximator. Sho Sonoda, Noboru Murata. 14 May 2015.