In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning
Xiaochuang Han
8 August 2023
arXiv:2308.04275

Papers citing "In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning"

17 papers

In-context Learning vs. Instruction Tuning: The Case of Small and Multilingual Language Models
David Ponce, Thierry Etchegoyhen (03 Mar 2025)

SPICA: Retrieving Scenarios for Pluralistic In-Context Alignment
Quan Ze Chen, K. J. Kevin Feng, Chan Young Park, Amy X. Zhang (16 Nov 2024)

Dynamic Rewarding with Prompt Optimization Enables Tuning-free Self-Alignment of Language Models
Somanshu Singla, Zhen Wang, Tianyang Liu, Abdullah Ashfaq, Zhiting Hu, Eric P. Xing (13 Nov 2024)

Compute-Constrained Data Selection
Junjie Oscar Yin, Alexander M. Rush (21 Oct 2024)

Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements
Jingyu Zhang, Ahmed Elgohary, Ahmed Magooda, Daniel Khashabi, Benjamin Van Durme (11 Oct 2024)

From Distributional to Overton Pluralism: Investigating Large Language Model Alignment
Thom Lake, Eunsol Choi, Greg Durrett (25 Jun 2024)

How Far Can In-Context Alignment Go? Exploring the State of In-Context Alignment
Heyan Huang, Yinghao Li, Huashan Sun, Yu Bai, Yang Gao (17 Jun 2024)

Is In-Context Learning Sufficient for Instruction Following in LLMs?
Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion (30 May 2024)

A Theoretical Understanding of Self-Correction through In-context Alignment [LRM]
Yifei Wang, Yuyang Wu, Zeming Wei, Stefanie Jegelka, Yisen Wang (28 May 2024)

Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [ELM]
Xiao Wang, Tianze Chen, Xianjun Yang, Qi Zhang, Xun Zhao, Dahua Lin (16 Apr 2024)

On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models [OffRL]
Xinpeng Wang, Shitong Duan, Xiaoyuan Yi, Jing Yao, Shanlin Zhou, Zhihua Wei, Peng Zhang, Dongkuan Xu, Maosong Sun, Xing Xie (07 Mar 2024)

Do Large Language Models Mirror Cognitive Language Processing?
Yuqi Ren, Renren Jin, Tongxuan Zhang, Deyi Xiong (28 Feb 2024)

LESS: Selecting Influential Data for Targeted Instruction Tuning
Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, Danqi Chen (06 Feb 2024)

Tuning Language Models by Proxy [ALM]
Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, Noah A. Smith (16 Jan 2024)

Human-Instruction-Free LLM Self-Alignment with Limited Samples
Hongyi Guo, Yuanshun Yao, Wei Shen, Jiaheng Wei, Xiaoying Zhang, Zhaoran Wang, Yang Liu (06 Jan 2024)

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning
Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Raghavi Chandu, Chandra Bhagavatula, Yejin Choi (04 Dec 2023)

Training language models to follow instructions with human feedback [OSLM, ALM]
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe (04 Mar 2022)