Auto-Patching: Enhancing Multi-Hop Reasoning in Language Models

Comments: 6 pages (main) + 2 pages (bibliography), 4 figures, 2 tables
Abstract

Multi-hop questions still stump large language models (LLMs), which struggle to link information across multiple reasoning steps. We introduce Auto-Patch, a novel method that dynamically patches hidden states during inference to enhance multi-hop reasoning in LLMs. Building on the PatchScopes framework, Auto-Patch selectively modifies internal representations using a learned classifier. Evaluated on the MuSiQue dataset, Auto-Patch improves the solve rate from 18.45% (baseline) to 23.63 ± 0.7% (3 runs), narrowing the gap to Chain-of-Thought prompting (27.44%). Our results highlight the potential of dynamic hidden state interventions for advancing complex reasoning in LLMs.
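To make the mechanism concrete, the sketch below shows one plausible way to implement classifier-gated hidden-state patching with PyTorch forward hooks. Everything here is an assumption for illustration: the names `PatchGate`, `make_patch_hook`, `source_hidden`, and the threshold are hypothetical, and the actual Auto-Patch classifier and choice of patched layers are described in the paper, not here.

```python
import torch
import torch.nn as nn

class PatchGate(nn.Module):
    """Hypothetical learned classifier deciding whether to patch a position."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Probability that this position's hidden state should be patched.
        return torch.sigmoid(self.scorer(h))

def make_patch_hook(gate: PatchGate, source_hidden: torch.Tensor,
                    threshold: float = 0.5):
    """Return a forward hook that overwrites hidden states the gate selects.

    `source_hidden` plays the role of a PatchScopes-style replacement
    representation; how Auto-Patch actually computes it is not shown here.
    """
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        # Hard gate: (batch, seq, 1) mask broadcast over the hidden dimension.
        mask = (gate(hidden) > threshold).float()
        patched = mask * source_hidden + (1.0 - mask) * hidden
        if isinstance(output, tuple):
            return (patched,) + output[1:]
        return patched
    return hook

# Usage sketch (layer index k is an assumption, not the paper's choice):
# layer = model.model.layers[k]
# handle = layer.register_forward_hook(make_patch_hook(gate, source_hidden))
# outputs = model.generate(**inputs)   # patching happens during inference
# handle.remove()
```

A forward hook is used here because it lets the intervention run at inference time without modifying model weights, which matches the abstract's description of dynamic patching; whether the authors implement it this way is not confirmed by the source.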

@article{jan2025_2506.00483,
  title={Auto-Patching: Enhancing Multi-Hop Reasoning in Language Models},
  author={Aviv Jan and Dean Tahory and Omer Talmi and Omar Abo Mokh},
  journal={arXiv preprint arXiv:2506.00483},
  year={2025}
}