| Title | Authors |
|---|---|
| ReLU² Wins: Discovering Efficient Activation Functions for Sparse LLMs | Zhengyan Zhang, Yixin Song, Guanghui Yu, Xu Han, Yankai Lin, Chaojun Xiao, Chenyang Song, Zhiyuan Liu, Zeyu Mi, Maosong Sun |
| Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes | Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister |