
Learning to Maximize Mutual Information for Chain-of-Thought Distillation
Papers citing "Learning to Maximize Mutual Information for Chain-of-Thought Distillation"
46 citing papers
- Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes. Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister.