Uncovering Intention through LLM-Driven Code Snippet Description Generation

Documenting code snippets is essential for pinpointing key areas that both developers and users should pay attention to. Examples include usage demonstrations and Application Programming Interface (API) documentation, which are especially important for third-party libraries. With the rise of Large Language Models (LLMs), our goal is to investigate the kinds of descriptions developers commonly write and to evaluate how well an LLM, in this case Llama, can support description generation. We use the NPM Code Snippets dataset, consisting of 185,412 packages with 1,024,579 code snippets, from which we sample 400 code snippets and their descriptions. First, our manual classification found that the majority of original descriptions (55.5%) highlight example-based usage. This finding emphasizes the importance of clear documentation, as some descriptions lacked sufficient detail to convey intent. Second, the LLM also classified the majority of original descriptions as "Example" (79.75%), consistent with our manual finding, though its higher rate suggests a propensity for generalization. Third, compared to the originals, the generated descriptions had an average similarity score of 0.7173, suggesting relevance but room for improvement; scores below 0.9 indicate partial mismatch with the originals. Our results show that, depending on the code snippet's task, the intention of its documentation may vary among usage instructions, installation steps, and descriptive learning examples for users of a library.
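The abstract reports an average similarity score of 0.7173 between generated and original descriptions, without specifying the metric here. As a minimal illustrative sketch, the comparison can be framed as a function mapping two description strings to a score in [0, 1]; the stand-in below uses Python's difflib.SequenceMatcher, which is an assumption, not the study's actual metric (embedding-based measures are a common alternative).

```python
from difflib import SequenceMatcher

def description_similarity(original: str, generated: str) -> float:
    """Return a similarity score in [0, 1] between two descriptions.

    SequenceMatcher is a stand-in metric for illustration only; the
    study's actual similarity measure is not specified in this abstract.
    """
    return SequenceMatcher(None, original.lower(), generated.lower()).ratio()

# Hypothetical example descriptions for a code snippet.
original = "Example usage of the fetch helper"
generated = "Usage example for the fetch helper function"
print(round(description_similarity(original, generated), 4))
```

Under the paper's framing, a score of 1.0 would mean the generated description matches the original exactly, while scores below 0.9 flag descriptions that diverge from the developer-written intent.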
@article{nugroho2025_2506.15453,
  title={Uncovering Intention through LLM-Driven Code Snippet Description Generation},
  author={Yusuf Sulistyo Nugroho and Farah Danisha Salam and Brittany Reid and Raula Gaikovina Kula and Kazumasa Shimari and Kenichi Matsumoto},
  journal={arXiv preprint arXiv:2506.15453},
  year={2025}
}