Recent advances in AI -- including generative approaches -- have resulted in technology that can support humans in scientific discovery and decision-making but may also disrupt democracies and target individuals. The responsible use of AI increasingly demands effective human-AI teaming, and thus effective interaction between humans and machines. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise. In cognitive science, human generalisation commonly involves abstraction and concept learning. In contrast, AI generalisation encompasses out-of-domain generalisation in machine learning, rule-based reasoning in symbolic AI, and abstraction in neuro-symbolic AI. In this perspective paper, we combine insights from AI and cognitive science to identify key commonalities and differences across three dimensions: notions of generalisation, methods for generalisation, and evaluation of generalisation. We map the different conceptualisations of generalisation in AI and cognitive science along these three dimensions and consider their role in human-AI teaming. This results in interdisciplinary challenges across AI and cognitive science that must be tackled to provide a foundation for effective and cognitively supported alignment in human-AI teaming scenarios.
@article{ilievski2025_2411.15626,
  title   = {Aligning Generalisation Between Humans and Machines},
  author  = {Filip Ilievski and Barbara Hammer and Frank van Harmelen and Benjamin Paassen and Sascha Saralajew and Ute Schmid and Michael Biehl and Marianna Bolognesi and Xin Luna Dong and Kiril Gashteovski and Pascal Hitzler and Giuseppe Marra and Pasquale Minervini and Martin Mundt and Axel-Cyrille Ngonga Ngomo and Alessandro Oltramari and Gabriella Pasi and Zeynep G. Saribatur and Luciano Serafini and John Shawe-Taylor and Vered Shwartz and Gabriella Skitalinskaya and Clemens Stachl and Gido M. van de Ven and Thomas Villmann},
  journal = {arXiv preprint arXiv:2411.15626},
  year    = {2025}
}