Chain of Hindsight Aligns Language Models with Feedback
- ALM
Learning from human preferences is important for language models to be helpful and useful for humans and to align with human and social values. Prior work has achieved remarkable success by learning from human feedback to understand and follow instructions. These approaches fall into two categories: supervised finetuning and reinforcement learning from human feedback (RLHF). Supervised finetuning trains on curated model generations that human labelers prefer; its key limitation is that it cannot learn from negative ratings, since models are trained only on positive feedback, which makes it data inefficient and hard to generalize. RLHF can learn from all feedback by fitting a reward function and optimizing it with RL, but it suffers from imperfect reward functions, and RL is notoriously difficult to tune. In this work, we propose a novel technique that addresses the limitations of both supervised finetuning and RLHF. Our method, Chain of Hindsight, aligns language models with all available feedback without using reinforcement learning. The idea is motivated by how humans learn from hindsight experience: we convert all feedback into natural-language sentences and finetune the model on them, leveraging the language understanding abilities of language models. Concretely, we condition the model on a sequence of model generations paired with hindsight feedback and finetune it to predict the most preferred output. By doing so, the model learns to identify and correct negative attributes or errors. Applying our method to GPT-J, we observe that it substantially outperforms both supervised finetuning and RLHF on summarization and dialogue tasks and is significantly preferred in human evaluations.
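To make the data construction concrete, below is a minimal sketch of how a preference pair might be turned into a single hindsight-annotated training sequence. The function name, prompt template, and feedback phrases ("A bad answer is", "A good answer is") are illustrative assumptions, not the paper's exact format; the key idea is that ranked generations and verbal feedback are serialized into one sequence that is trained with an ordinary language-modeling objective.

```python
# Hypothetical sketch of Chain-of-Hindsight data construction.
# The template strings below are assumptions for illustration only.

def build_coh_example(prompt: str, preferred: str, dispreferred: str) -> str:
    """Turn a human preference pair into one hindsight-annotated sequence.

    The model is finetuned on such sequences with standard next-token
    prediction, so it learns to associate hindsight feedback phrases
    with better and worse outputs.
    """
    return (
        f"{prompt}\n"
        f"A bad answer is: {dispreferred}\n"
        f"A good answer is: {preferred}"
    )


# Example usage with a made-up summarization preference pair.
example = build_coh_example(
    prompt="Summarize: The city council voted to expand the bike-lane network ...",
    preferred="The council approved a bike-lane expansion.",
    dispreferred="The council talked about transportation.",
)
print(example)
```

At inference time, one would condition the finetuned model on the positive feedback phrase (here, "A good answer is:") to elicit the preferred behavior, which is how learning from both positive and negative feedback is achieved without a reward model or RL optimization.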
View on arXiv