
Improving Chemical Understanding of LLMs via SMILES Parsing

Main text: 8 pages; bibliography: 4 pages; appendix: 4 pages. 14 figures, 8 tables.
Abstract

Large language models (LLMs) are increasingly recognized as powerful tools for scientific discovery, particularly in molecular science. A fundamental requirement for these models is the ability to accurately understand molecular structures, commonly encoded in the SMILES representation. However, current LLMs struggle to interpret SMILES, failing even at basic tasks such as counting molecular rings. To address this limitation, we introduce CLEANMOL, a novel framework that formulates SMILES parsing as a suite of clean, deterministic tasks explicitly designed to promote graph-level molecular comprehension. These tasks range from subgraph matching to global graph matching, providing structured supervision aligned with molecular structural properties. We construct a molecular pretraining dataset with adaptive difficulty scoring and pre-train open-source LLMs on these tasks. Our results show that CLEANMOL not only enhances structural comprehension but also achieves the best performance on, or remains competitive with baselines on, the Mol-Instructions benchmark.
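To illustrate why tasks like ring counting are "clean and deterministic," here is a minimal sketch (not from the paper) of how a ground-truth ring count can be derived directly from a SMILES string: each pair of matching ring-closure labels corresponds to one ring-closure bond, and the number of such bonds equals the cyclomatic ring count of the molecular graph. The function name and the bracket-skipping logic are illustrative assumptions, not part of CLEANMOL.

```python
def count_rings(smiles: str) -> int:
    """Count rings in a SMILES string via ring-closure bond pairs.

    Each ring-closure label (a digit, or %nn for two-digit labels)
    appears exactly twice; every matched pair is one ring-closure
    bond, and the number of such bonds equals the cyclomatic ring
    count. Digits inside bracket atoms (e.g. isotopes in [13C])
    are skipped, since they are not ring closures.
    """
    open_labels = set()  # ring-closure labels currently awaiting their match
    rings = 0
    in_bracket = False
    i = 0
    while i < len(smiles):
        ch = smiles[i]
        if ch == '[':
            in_bracket = True
        elif ch == ']':
            in_bracket = False
        elif not in_bracket:
            if ch == '%':  # two-digit ring-closure label, e.g. %12
                label = smiles[i + 1:i + 3]
                i += 2
            elif ch.isdigit():
                label = ch
            else:
                i += 1
                continue
            if label in open_labels:
                open_labels.remove(label)  # closing bond completes one ring
                rings += 1
            else:
                open_labels.add(label)  # opening half of a ring closure
        i += 1
    return rings
```

For example, benzene (`c1ccccc1`) yields 1 and naphthalene (`c1ccc2ccccc2c1`) yields 2. Labels are removed from the open set once matched, so SMILES strings that legally reuse a digit for a later ring are handled correctly.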

@article{jang2025_2505.16340,
  title={Improving Chemical Understanding of LLMs via SMILES Parsing},
  author={Yunhui Jang and Jaehyung Kim and Sungsoo Ahn},
  journal={arXiv preprint arXiv:2505.16340},
  year={2025}
}