EquivPruner: Boosting Efficiency and Quality in LLM-Based Search via Action Pruning

22 May 2025
Jiawei Liu
Qisi Chen
Jianshu Zhang
Quan Liu
Defu Lian
    LLMAG
arXiv (abs) · PDF · HTML
Main: 7 pages · Appendix: 2 pages · Bibliography: 2 pages · 4 figures · 5 tables
Abstract

Large Language Models (LLMs) excel at complex reasoning through search algorithms, yet current strategies often suffer from massive token consumption due to redundant exploration of semantically equivalent steps. Existing semantic similarity methods struggle to accurately identify such equivalence in domain-specific contexts like mathematical reasoning. To address this, we propose EquivPruner, a simple yet effective approach that identifies and prunes semantically equivalent actions during LLM reasoning search. We also introduce MathEquiv, the first dataset for mathematical statement equivalence, which enables the training of a lightweight equivalence detector. Extensive experiments across various models and tasks demonstrate that EquivPruner significantly reduces token consumption, improving search efficiency and often bolstering reasoning accuracy. For instance, when applied to Qwen2.5-Math-7B-Instruct on GSM8K, EquivPruner reduced token consumption by 48.1% while also improving accuracy. Our code is available at this https URL.
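The sketch below illustrates the pruning idea described in the abstract: before expanding candidate reasoning steps during search, a lightweight detector judges which steps are semantically equivalent and only one representative per group is kept. The function name and the `is_equivalent` interface are assumptions for illustration; the paper's actual detector is a model trained on MathEquiv, not the toy comparator used here.

from typing import Callable, List


def prune_equivalent_actions(
    candidates: List[str],
    is_equivalent: Callable[[str, str], bool],
) -> List[str]:
    """Keep one representative from each group of mutually equivalent candidate steps."""
    representatives: List[str] = []
    for step in candidates:
        # Discard the step if the detector says it matches an already-kept representative.
        if not any(is_equivalent(step, kept) for kept in representatives):
            representatives.append(step)
    return representatives


if __name__ == "__main__":
    # Toy stand-in for the learned equivalence detector: treat steps as equivalent
    # if they match after stripping whitespace and case (hypothetical, for demo only).
    toy_detector = lambda a, b: a.replace(" ", "").lower() == b.replace(" ", "").lower()

    steps = ["x = 2 + 3", "x=2+3", "x = 5"]
    print(prune_equivalent_actions(steps, toy_detector))  # ['x = 2 + 3', 'x = 5']

In a search setting, the surviving representatives would then be the only branches expanded further, which is where the reported token savings come from.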

View on arXiv
@article{liu2025_2505.16312,
  title={EquivPruner: Boosting Efficiency and Quality in LLM-Based Search via Action Pruning},
  author={Jiawei Liu and Qisi Chen and Jianshu Zhang and Quan Liu and Defu Lian},
  journal={arXiv preprint arXiv:2505.16312},
  year={2025}
}