We explore how a shell that accepts natural language input via an LLM might be designed differently from the shells of today. Because LLMs may produce unintended or unexplainable outputs, we argue that a natural language shell should provide guardrails that empower users to recover from such errors. We concretize some ideas for doing so by designing a new shell called NaSh, identify remaining open problems in this space, and discuss research directions to address them.
@article{gyawali2025_2506.13028,
  title={NaSh: Guardrails for an LLM-Powered Natural Language Shell},
  author={Bimal Raj Gyawali and Saikrishna Achalla and Konstantinos Kallas and Sam Kumar},
  journal={arXiv preprint arXiv:2506.13028},
  year={2025}
}