
Autonomous Microscopy Experiments through Large Language Model Agents

Main: 53 pages, 18 figures, 5 tables
Abstract

Large language models (LLMs) are revolutionizing self-driving laboratories (SDLs) for materials research, promising unprecedented acceleration of scientific discovery. However, current SDL implementations rely on rigid protocols that fail to capture the adaptability and intuition of expert scientists in dynamic experimental settings. We introduce the Artificially Intelligent Lab Assistant (AILA), a framework that automates atomic force microscopy (AFM) through LLM-driven agents. Further, we develop AFMBench, a comprehensive evaluation suite that challenges AI agents across the complete scientific workflow, from experimental design to results analysis. We find that state-of-the-art models struggle with basic tasks and coordination scenarios. Notably, Claude 3.5 Sonnet performs unexpectedly poorly despite excelling in materials-domain question-answering (QA) benchmarks, revealing that domain-specific QA proficiency does not necessarily translate to effective agentic capabilities. Additionally, we observe that LLMs can deviate from instructions, raising safety-alignment concerns for SDL applications. Our ablations reveal that multi-agent frameworks outperform single-agent architectures. We also observe significant prompt fragility, where slight modifications in prompt structure cause substantial performance variations even in capable models such as GPT-4o. Finally, we evaluate AILA's effectiveness in increasingly advanced experiments: AFM calibration, feature detection, mechanical property measurement, graphene layer counting, and indenter detection. Our findings underscore the necessity of rigorous benchmarking protocols and prompt-engineering strategies before deploying AI laboratory assistants in scientific research environments.
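To make the agentic setup described in the abstract concrete, the following is a minimal sketch of an LLM-driven tool-dispatch loop for microscope control. All names here (the tool registry, `mock_planner`, `run_agent`) are hypothetical illustrations for exposition, not the AILA implementation; a real system would replace `mock_planner` with calls to an LLM and the tool bodies with instrument drivers.

```python
# Hypothetical sketch: an agent loop where a planner (stand-in for an LLM)
# chooses tools that act on shared experiment state. Not the AILA codebase.

def calibrate(state):
    """Pretend to calibrate the AFM scanner."""
    state["calibrated"] = True
    return "calibration ok"

def scan_region(state):
    """Pretend to acquire a topography image; requires prior calibration."""
    if not state.get("calibrated"):
        return "error: calibrate first"
    state["image"] = [[0.0] * 4 for _ in range(4)]  # placeholder scan data
    return "scan complete"

def analyze(state):
    """Pretend to extract a feature count from the acquired image."""
    if "image" not in state:
        return "error: no image"
    return "features detected: 0"

TOOLS = {"calibrate": calibrate, "scan_region": scan_region, "analyze": analyze}

def mock_planner(history):
    """Stand-in for an LLM: emits a fixed plan, then signals completion."""
    plan = ["calibrate", "scan_region", "analyze"]
    return plan[len(history)] if len(history) < len(plan) else None

def run_agent(planner, max_steps=10):
    """Run the planner-tool loop, feeding each result back as context."""
    state, history = {}, []
    for _ in range(max_steps):
        tool = planner(history)
        if tool is None:  # planner decides the task is done
            break
        result = TOOLS[tool](state)
        history.append((tool, result))
    return history

if __name__ == "__main__":
    for tool, result in run_agent(mock_planner):
        print(f"{tool}: {result}")
```

The ordering constraint in `scan_region` (calibration must precede scanning) illustrates why the paper benchmarks coordination: a planner that skips a step receives an error observation and must recover.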

@article{mandal2025_2501.10385,
  title={Autonomous Microscopy Experiments through Large Language Model Agents},
  author={Indrajeet Mandal and Jitendra Soni and Mohd Zaki and Morten M. Smedskjaer and Katrin Wondraczek and Lothar Wondraczek and Nitya Nand Gosvami and N. M. Anoop Krishnan},
  journal={arXiv preprint arXiv:2501.10385},
  year={2025}
}