Think Big, Generate Quick: LLM-to-SLM for Fast Autoregressive Decoding
26 February 2024
Benjamin Bergner, Andrii Skliar, Amelie Royer, Tijmen Blankevoort, Yuki Markus Asano, B. Bejnordi
arXiv: 2402.16844 (abs) | PDF | HTML
Papers citing "Think Big, Generate Quick: LLM-to-SLM for Fast Autoregressive Decoding" (3 of 3 papers shown):

- "Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs" by Jaydip Sen, Saptarshi Sengupta, S. Dasgupta (03 Jun 2025)
- "BERTtime Stories: Investigating the Role of Synthetic Story Data in Language Pre-training" by Nikitas Theodoropoulos, Giorgos Filandrianos, Vassilis Lyberatos, Maria Lymperaiou, Giorgos Stamou (SyDa, 24 Feb 2025)
- "Exploring Gen-AI applications in building research and industry: A review" by Hanlong Wan, Jian Zhang, Yan Chen, Weili Xu, Fan Feng (AI4CE, 01 Oct 2024)