ResearchTrend.AI

Large Language Models as Autonomous Spacecraft Operators in Kerbal Space Program

26 May 2025
Alejandro Carrasco
Victor Rodríguez-Fernández
Richard Linares
Main text: 34 pages, 12 figures, 11 tables; bibliography: 5 pages
Abstract

A recent trend is the use of Large Language Models (LLMs) as autonomous agents that take actions based on the content of user text prompts. We intend to apply these concepts to the field of space control, enabling LLMs to play a significant role in the decision-making process for autonomous satellite operations. As a first step toward this goal, we have developed a pure LLM-based solution for the Kerbal Space Program Differential Games (KSPDG) challenge, a public software design competition in which participants create autonomous agents, running on the KSP game engine, that maneuver satellites engaged in non-cooperative space operations. Our approach leverages prompt engineering, few-shot prompting, and fine-tuning techniques to create an effective LLM-based agent that ranked 2nd in the competition. To the best of our knowledge, this work pioneers the integration of LLM agents into space research. The project comprises several open repositories to facilitate replication and further research. The codebase is accessible on \href{this https URL}{GitHub}, while the trained models and datasets are available on \href{this https URL}{Hugging Face}. Additionally, experiment tracking and detailed results can be reviewed on \href{this https URL}{Weights \& Biases}.
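To make the few-shot prompting idea concrete, here is a minimal sketch (our own illustration, not the authors' actual code) of how a KSPDG-style observation could be rendered into a chat prompt with one worked example, and how the model's reply could be parsed into a throttle command. The message schema, example values, and JSON action format are all assumptions for illustration.

```python
import json

# System instruction telling the model how to act and how to format replies.
SYSTEM = (
    "You operate a pursuit spacecraft in Kerbal Space Program. "
    'Reply with JSON: {"throttle": [ft, rt, dt]}, each value in [-1, 1].'
)

# One hypothetical few-shot example pair (observation -> action).
FEW_SHOT = [
    {"role": "user",
     "content": "relative position (m): [1200, -300, 50]; "
                "relative velocity (m/s): [-4, 1, 0]"},
    {"role": "assistant",
     "content": json.dumps({"throttle": [1.0, -0.5, 0.0]})},
]

def build_messages(rel_pos, rel_vel):
    """Assemble the chat messages sent to the LLM for one decision step."""
    obs = (f"relative position (m): {rel_pos}; "
           f"relative velocity (m/s): {rel_vel}")
    return [{"role": "system", "content": SYSTEM}, *FEW_SHOT,
            {"role": "user", "content": obs}]

def parse_action(reply):
    """Turn the model's JSON reply into a (ft, rt, dt) throttle tuple."""
    throttle = json.loads(reply)["throttle"]
    return tuple(float(x) for x in throttle)

# Example: one prompt for the current observation, and parsing a mock reply.
messages = build_messages([800, 120, -40], [-3, 0, 1])
action = parse_action('{"throttle": [1.0, 0.0, 0.5]}')
print(len(messages), action)  # → 4 (1.0, 0.0, 0.5)
```

In an actual agent loop, `messages` would be sent to a chat-completion API at each control step and the parsed throttle applied to the vessel; fine-tuning would replace or shrink the few-shot block with learned behavior.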

@article{carrasco2025_2505.19896,
  title={Large Language Models as Autonomous Spacecraft Operators in Kerbal Space Program},
  author={Alejandro Carrasco and Victor Rodriguez-Fernandez and Richard Linares},
  journal={arXiv preprint arXiv:2505.19896},
  year={2025}
}