
Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs

Main: 39 pages, 7 tables
Abstract

This chapter explores advancements in decoding strategies for large language models (LLMs), focusing on enhancing the Locally Typical Sampling (LTS) algorithm. Traditional decoding methods, such as top-k and nucleus sampling, often struggle to balance fluency, diversity, and coherence in text generation. To address these challenges, Adaptive Semantic-Aware Typicality Sampling (ASTS) is proposed as an improved version of LTS, incorporating dynamic entropy thresholding, multi-objective scoring, and reward-penalty adjustments. ASTS ensures contextually coherent and diverse text generation while maintaining computational efficiency. Its performance is evaluated across multiple benchmarks, including story generation and abstractive summarization, using metrics such as perplexity, MAUVE, and diversity scores. Experimental results demonstrate that ASTS outperforms existing sampling techniques by reducing repetition, enhancing semantic alignment, and improving fluency.
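To make the sampling mechanism concrete, the sketch below illustrates locally typical sampling with an entropy-dependent mass threshold, the general idea behind the "dynamic entropy thresholding" component described above. It is an illustrative approximation only, not the authors' ASTS implementation: the adaptation rule (`tau = base_tau + k * norm_entropy`), the parameter values, and the function names are assumptions made for demonstration, and the multi-objective scoring and reward-penalty terms are omitted.

```python
# Minimal sketch of locally typical sampling with a dynamic (entropy-scaled)
# probability-mass threshold. Illustrative only; NOT the paper's ASTS method.
# The adaptation rule and all constants below are assumed for demonstration.
import numpy as np

def typical_sample(logits, base_tau=0.2, k=0.6, rng=None):
    """Sample one token id from `logits` (1-D array over the vocabulary)."""
    rng = rng or np.random.default_rng()

    # Softmax to obtain the predictive distribution.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Shannon entropy of the distribution (in nats).
    entropy = -np.sum(probs * np.log(probs + 1e-12))

    # Dynamic mass threshold: flatter (higher-entropy) distributions keep
    # more probability mass. This specific rule is an assumption.
    norm_entropy = entropy / np.log(len(probs))
    tau = min(base_tau + k * norm_entropy, 0.99)

    # Locally typical set: tokens whose surprisal is closest to the entropy.
    deviation = np.abs(-np.log(probs + 1e-12) - entropy)
    order = np.argsort(deviation)
    cum_mass = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum_mass, tau) + 1
    keep = order[:cutoff]

    # Renormalize over the typical set and sample from it.
    keep_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=keep_probs))

# Example: sample from a toy 8-token distribution.
logits = np.array([2.0, 1.5, 1.2, 0.3, -0.5, -1.0, -2.0, -3.0])
print(typical_sample(logits, rng=np.random.default_rng(0)))
```

In this sketch, a peaked (low-entropy) distribution yields a small threshold and restricts sampling to a few near-typical tokens, while a flat distribution admits more candidates; the full ASTS method additionally rescores candidates with its multi-objective and reward-penalty terms.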

@article{sen2025_2506.05387,
  title={Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs},
  author={Jaydip Sen and Saptarshi Sengupta and Subhasis Dasgupta},
  journal={arXiv preprint arXiv:2506.05387},
  year={2025}
}