
APEX: Empowering LLMs with Physics-Based Task Planning for Real-time Insight

Comments: 9 pages (main) + 3 pages (bibliography) + 18 pages (appendix); 11 figures, 9 tables
Abstract

Large Language Models (LLMs) demonstrate strong reasoning and task planning capabilities but remain fundamentally limited in physical interaction modeling. Existing approaches integrate perception via Vision-Language Models (VLMs) or adaptive decision-making through Reinforcement Learning (RL), but they either fail to capture dynamic object interactions or require task-specific training, limiting their real-world applicability. We introduce APEX (Anticipatory Physics-Enhanced Execution), a framework that equips LLMs with physics-driven foresight for real-time task planning. APEX constructs structured graphs to identify and model the most relevant dynamic interactions in the environment, providing LLMs with explicit physical state updates. Simultaneously, APEX runs low-latency forward simulations of physically feasible actions, allowing LLMs to select optimal strategies based on predictive outcomes rather than static observations. We evaluate APEX on three benchmarks designed to assess perception, prediction, and decision-making: (1) Physics Reasoning Benchmark, testing causal inference and object motion prediction; (2) Tetris, evaluating whether physics-informed prediction enhances decision-making performance in long-horizon planning tasks; (3) Dynamic Obstacle Avoidance, assessing the immediate integration of perception and action feasibility analysis. APEX significantly outperforms standard LLMs and VLM-based models, demonstrating the necessity of explicit physics reasoning for bridging the gap between language-based intelligence and real-world task execution. The source code and experiment setup are publicly available at this https URL.
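The abstract describes a three-stage loop: build a structured graph over the relevant dynamic interactions, forward-simulate feasible actions, and feed the predicted physical states to the LLM. The sketch below illustrates that loop under simplifying assumptions; every name (Obj, interaction_graph, rollout, physical_state_prompt) and the constant-velocity physics are hypothetical illustrations, not the authors' actual implementation.

```python
# Hypothetical sketch of the APEX loop from the abstract:
# (1) build an interaction graph, (2) forward-simulate candidate dynamics,
# (3) serialize predicted states as an explicit physical-state prompt.
from dataclasses import dataclass


@dataclass
class Obj:
    name: str
    pos: tuple  # (x, y) position in meters (assumed 2D scene)
    vel: tuple  # (vx, vy) velocity in meters/second


def interaction_graph(objs, radius=1.0):
    """Edge between each pair of objects close enough to plausibly interact.
    A cheap proximity test stands in for the paper's relevance modeling."""
    edges = []
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            dx = objs[i].pos[0] - objs[j].pos[0]
            dy = objs[i].pos[1] - objs[j].pos[1]
            if dx * dx + dy * dy < radius * radius:
                edges.append((i, j))
    return edges


def rollout(objs, dt=0.05, steps=20):
    """Low-latency forward simulation; constant velocity is placeholder physics."""
    out = objs
    for _ in range(steps):
        out = [Obj(o.name,
                   (o.pos[0] + o.vel[0] * dt, o.pos[1] + o.vel[1] * dt),
                   o.vel)
               for o in out]
    return out


def physical_state_prompt(objs, edges):
    """Serialize predicted states and interactions into text for the LLM."""
    lines = [f"{o.name}: pos={o.pos}, vel={o.vel}" for o in objs]
    lines += [f"interaction: {objs[i].name} <-> {objs[j].name}"
              for i, j in edges]
    return "\n".join(lines)


if __name__ == "__main__":
    scene = [Obj("ball", (0.0, 0.0), (0.4, 0.0)),
             Obj("robot", (0.6, 0.0), (0.0, 0.0))]
    edges = interaction_graph(scene)
    predicted = rollout(scene)
    print(physical_state_prompt(predicted, edges))
```

In a full system, the rollout would be run once per candidate action and the resulting prompts compared by the LLM to pick a strategy; the sketch shows only the state-update path for a single scene.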

@article{huang2025_2505.13921,
  title={APEX: Empowering LLMs with Physics-Based Task Planning for Real-time Insight},
  author={Wanjing Huang and Weixiang Yan and Zhen Zhang and Ambuj Singh},
  journal={arXiv preprint arXiv:2505.13921},
  year={2025}
}