Lost in Space: Optimizing Tokens for Grammar-Constrained Decoding

24 February 2025
Sil Hamilton
David Mimno
Abstract

General-purpose language models are trained to produce varied natural language outputs, but for some tasks like annotation or classification we need more specific output formats. LLM systems increasingly support structured output, sampling tokens according to a grammar, which enforces a format but which can also reduce performance. We ask whether there are systematic differences between grammars that appear semantically similar to humans. To answer this question, we test four popular model families with five token formats on four NLP benchmarks. All models perform most accurately when instructed to classify with real numbers. Performance also improves by 5%-10% when models are instructed to return tokens incorporating leading whitespace, which we find can help models avoid structural deficiencies in subword token representations. Format-based differences are largest for smaller models that are often used for local laptop-scale inference. We present best practices for researchers using language models as zero-shot classifiers with structured output.
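The whitespace effect described in the abstract is easy to observe directly in a subword tokenizer. The following sketch (ours, not from the paper) uses the publicly available GPT-2 BPE tokenizer via the Hugging Face transformers package to show that label strings with and without a leading space are encoded as different token IDs; a grammar that only permits the space-free variant can therefore force the model onto a less natural token. The specific labels and tokenizer here are illustrative assumptions, not the paper's exact setup.

# Sketch: leading whitespace changes which subword token(s) a label maps to.
# Assumes the `transformers` package is installed; GPT-2 is used only as an example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for label in ["yes", " yes", "1", " 1"]:
    ids = tokenizer.encode(label)
    print(f"{label!r:>7} -> token ids {ids}")

Running this prints distinct ID sequences for "yes" versus " yes", which is the kind of representational gap the authors report that whitespace-inclusive output formats help avoid.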

@article{hamilton2025_2502.14969,
  title={Lost in Space: Optimizing Tokens for Grammar-Constrained Decoding},
  author={Sil Hamilton and David Mimno},
  journal={arXiv preprint arXiv:2502.14969},
  year={2025}
}