
Self-Data Distillation for Recovering Quality in Pruned Large Language Models
Papers citing "Self-Data Distillation for Recovering Quality in Pruned Large Language Models"
| Title |
|---|
| Llama 2: Open Foundation and Fine-Tuned Chat Models — Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, ... Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom |