
Semi-structured LLM Reasoners Can Be Rigorously Audited

Main: 9 pages · 7 figures · 11 tables · Bibliography: 4 pages · Appendix: 5 pages
Abstract

As Large Language Models (LLMs) become increasingly capable at reasoning, the problem of "faithfulness" persists: LLM "reasoning traces" can contain errors and omissions that are difficult to detect, and may obscure biases in model outputs. To address these limitations, we introduce Semi-Structured Reasoning Models (SSRMs), which internalize a semi-structured Chain-of-Thought (CoT) reasoning format within the model. Our SSRMs generate reasoning traces in a Pythonic syntax. While SSRM traces are not executable, they adopt a restricted, task-specific vocabulary to name distinct reasoning steps and to mark each step's inputs and outputs. Through extensive evaluation on ten benchmarks, SSRMs demonstrate strong performance and generality: they outperform comparably sized baselines by nearly ten percentage points on in-domain tasks while remaining competitive with specialized models on out-of-domain medical benchmarks. Furthermore, we show that semi-structured reasoning traces are more amenable to analysis: in particular, they can be automatically audited to identify reasoning flaws. We explore both hand-crafted structured audits, which detect task-specific problematic reasoning patterns, and learned typicality audits, which apply probabilistic models over reasoning patterns, and show that both kinds of audit can be used to effectively flag probable reasoning errors.
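
To make the format concrete, below is a minimal, hypothetical sketch of what a semi-structured Pythonic trace and a hand-crafted structured audit might look like. The step names (retrieve_fact, compare, conclude) and the audit rule (flagging step inputs that no earlier step produced) are illustrative assumptions, not the paper's actual vocabulary or audit set.

    # Hypothetical illustration only: the abstract does not give the exact trace grammar,
    # so the step vocabulary and audit rule here are assumptions.
    import re

    # A toy semi-structured trace in Pythonic syntax: each line names a reasoning step
    # and marks its inputs and the output variable it produces. Traces are not executed.
    trace = """
    fact_1 = retrieve_fact(topic="boiling point of water at sea level")
    fact_2 = retrieve_fact(topic="boiling point of water at high altitude")
    cmp_1 = compare(fact_1, fact_2)
    answer = conclude(cmp_1)
    """

    STEP_PATTERN = re.compile(r"^(\w+)\s*=\s*(\w+)\((.*)\)$")

    def audit_trace(trace: str) -> list[str]:
        """Hand-crafted structured audit: flag steps whose inputs were never
        produced by an earlier step (a simple 'undefined reference' pattern)."""
        defined: set[str] = set()
        flags: list[str] = []
        for line in filter(None, (l.strip() for l in trace.splitlines())):
            match = STEP_PATTERN.match(line)
            if match is None:
                flags.append(f"malformed step: {line}")
                continue
            output, step_name, arg_str = match.groups()
            # Positional args that look like variables must refer to earlier outputs;
            # keyword args and string literals are ignored by this check.
            for arg in (a.strip() for a in arg_str.split(",") if a.strip()):
                if "=" not in arg and arg[0] not in ('"', "'") and arg not in defined:
                    flags.append(f"{step_name}: input '{arg}' is never defined")
            defined.add(output)
        return flags

    if __name__ == "__main__":
        print(audit_trace(trace) or "no reasoning flaws flagged")

A learned typicality audit, by contrast, would score the sequence of step names under a probabilistic model of reasoning patterns and flag traces that are unusually low-probability.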

@article{leng2025_2505.24217,
  title={Semi-structured LLM Reasoners Can Be Rigorously Audited},
  author={Jixuan Leng and Cassandra A. Cohen and Zhixian Zhang and Chenyan Xiong and William W. Cohen},
  journal={arXiv preprint arXiv:2505.24217},
  year={2025}
}