Think Big, Generate Quick: LLM-to-SLM for Fast Autoregressive Decoding
arXiv: 2402.16844 (v3, latest)
26 February 2024

Benjamin Bergner
Andrii Skliar
Amelie Royer
Tijmen Blankevoort
Yuki Markus Asano
B. Bejnordi

Papers citing "Think Big, Generate Quick: LLM-to-SLM for Fast Autoregressive Decoding"

3 / 3 papers shown
Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs
Jaydip Sen, Saptarshi Sengupta, S. Dasgupta
03 Jun 2025

BERTtime Stories: Investigating the Role of Synthetic Story Data in Language Pre-training
Nikitas Theodoropoulos, Giorgos Filandrianos, Vassilis Lyberatos, Maria Lymperaiou, Giorgos Stamou
SyDa
24 Feb 2025

Exploring Gen-AI applications in building research and industry: A review
Hanlong Wan, Jian Zhang, Yan Chen, Weili Xu, Fan Feng
AI4CE
01 Oct 2024