Shuttle Between the Instructions and the Parameters of Large Language Models

The interaction with Large Language Models (LLMs) through instructions has been extensively investigated in the research community. While instructions have been widely used as guidelines for task solving, this paper further observes that both instructions and parameters are compressions of task data. They should therefore be strongly correlated, and it should be possible to learn to predict one from the other. This paper proposes a novel neural network framework, SHIP (\textbf{Sh}uttle between the \textbf{I}nstructions and the \textbf{P}arameters), to model and learn the mutual mappings between the instructions and the parameters of LLMs. We verify that SHIP can effectively map instructions to parameters and vice versa by evaluating it on the tasks of instruction deduction and induction. The results show that SHIP outperforms existing baseline methods in deductive capabilities and significantly surpasses them in inductive capabilities. Moreover, SHIP can effectively combine the two mapping processes to perform excellent inductive reasoning. The code and data for this paper are released at this https URL.
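As a rough illustration of the idea of "shuttling" between the two representations, the sketch below pairs an instruction embedding with a flattened vector of task-specific parameters (e.g. a low-rank adapter delta) and trains two small MLPs to map in either direction, with a cycle-consistency term tying the directions together. All module names, dimensions, and loss terms here are hypothetical assumptions for illustration only; this is not the SHIP architecture described in the paper.

```python
# Minimal sketch, assuming paired (instruction embedding, fitted parameter
# vector) training examples. Dimensions and names are illustrative.
import torch
import torch.nn as nn

INSTR_DIM, PARAM_DIM, HIDDEN = 768, 4096, 1024

class InstrToParams(nn.Module):          # "deduction" direction
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(INSTR_DIM, HIDDEN), nn.GELU(),
            nn.Linear(HIDDEN, PARAM_DIM))
    def forward(self, instr_emb):
        return self.net(instr_emb)

class ParamsToInstr(nn.Module):          # "induction" direction
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PARAM_DIM, HIDDEN), nn.GELU(),
            nn.Linear(HIDDEN, INSTR_DIM))
    def forward(self, param_vec):
        return self.net(param_vec)

# One training step: each mapper is supervised by the other modality,
# plus a cycle term (instruction -> parameters -> instruction).
i2p, p2i = InstrToParams(), ParamsToInstr()
opt = torch.optim.Adam(list(i2p.parameters()) + list(p2i.parameters()), lr=1e-4)

instr_emb = torch.randn(8, INSTR_DIM)    # stand-in for encoded instructions
param_vec = torch.randn(8, PARAM_DIM)    # stand-in for task-tuned parameters

pred_params = i2p(instr_emb)
pred_instr = p2i(param_vec)
loss = (nn.functional.mse_loss(pred_params, param_vec)
        + nn.functional.mse_loss(pred_instr, instr_emb)
        + nn.functional.mse_loss(p2i(pred_params), instr_emb))  # cycle term
opt.zero_grad()
loss.backward()
opt.step()
```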
@article{sun2025_2502.02315,
  title   = {Shuttle Between the Instructions and the Parameters of Large Language Models},
  author  = {Wangtao Sun and Haotian Xu and Huanxuan Liao and Xuanqing Yu and Zhongtao Jiang and Shizhu He and Jun Zhao and Kang Liu},
  journal = {arXiv preprint arXiv:2502.02315},
  year    = {2025}
}