Embodied Web Agents: Bridging Physical-Digital Realms for Integrated Agent Intelligence

AI agents today are mostly siloed: they either retrieve and reason over vast amounts of digital information and knowledge obtained online, or they interact with the physical world through embodied perception, planning, and action, but rarely both. This separation limits their ability to solve tasks that require integrated physical and digital intelligence, such as cooking from online recipes, navigating with dynamic map data, or interpreting real-world landmarks using web knowledge. We introduce Embodied Web Agents, a novel paradigm for AI agents that fluidly bridge embodiment and web-scale reasoning. To operationalize this concept, we first develop the Embodied Web Agents task environments, a unified simulation platform that tightly integrates realistic 3D indoor and outdoor environments with functional web interfaces. Building on this platform, we construct and release the Embodied Web Agents Benchmark, which encompasses a diverse suite of tasks, including cooking, navigation, shopping, tourism, and geolocation, all requiring coordinated reasoning across physical and digital realms for systematic assessment of cross-domain intelligence. Experimental results reveal significant performance gaps between state-of-the-art AI systems and human capabilities, establishing both challenges and opportunities at the intersection of embodied cognition and web-scale knowledge access. All datasets, code, and websites are publicly available at our project page: this https URL.
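To make the paradigm concrete, the sketch below shows one way an agent loop might interleave a digital phase (retrieving knowledge from a web interface) with a physical phase (grounding that knowledge as actions in a 3D environment), using the paper's cooking-from-online-recipes example. None of the class or method names (WebInterface, EmbodiedEnv, run_agent, etc.) come from the released code; they are hypothetical stubs illustrating the coordination pattern under stated assumptions, not the benchmark's actual API.

from dataclasses import dataclass, field

@dataclass
class WebInterface:
    """Hypothetical stub web environment: maps queries to retrieved snippets."""
    pages: dict = field(default_factory=lambda: {
        "tomato soup recipe": "Steps: chop tomatoes; simmer 20 min; season.",
    })

    def search(self, query: str) -> str:
        # Return the page text for a query, or a fallback when nothing matches.
        return self.pages.get(query, "no results")

@dataclass
class EmbodiedEnv:
    """Hypothetical stub 3D environment: tracks a simple kitchen state."""
    state: dict = field(default_factory=lambda: {"tomatoes": "whole"})

    def observe(self) -> dict:
        # Embodied perception: expose a copy of the current world state.
        return dict(self.state)

    def act(self, action: str) -> None:
        # Embodied action: mutate the world state if the action is recognized.
        if action == "chop tomatoes":
            self.state["tomatoes"] = "chopped"

def run_agent(env: EmbodiedEnv, web: WebInterface, goal: str) -> None:
    # Digital phase: pull task knowledge from the web interface.
    recipe = web.search(goal)
    print("web knowledge:", recipe)
    # Physical phase: ground the first retrieved instruction as an action.
    if "chop tomatoes" in recipe:
        env.act("chop tomatoes")
    print("world state:", env.observe())

if __name__ == "__main__":
    run_agent(EmbodiedEnv(), WebInterface(), "tomato soup recipe")

The point of the sketch is the control flow, not the stubs: the agent must decide when to consult the web and when to act in the world, and carry information across that boundary, which is exactly the cross-domain coordination the benchmark is designed to assess.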
@article{hong2025_2506.15677,
  title={Embodied Web Agents: Bridging Physical-Digital Realms for Integrated Agent Intelligence},
  author={Yining Hong and Rui Sun and Bingxuan Li and Xingcheng Yao and Maxine Wu and Alexander Chien and Da Yin and Ying Nian Wu and Zhecan James Wang and Kai-Wei Chang},
  journal={arXiv preprint arXiv:2506.15677},
  year={2025}
}