ResearchTrend.AI
Human in the Latent Loop (HILL): Interactively Guiding Model Training Through Human Intuition

9 May 2025
Daniel Geissler
Lars Krupp
Vishal Banwari
David Habusch
Bo Zhou
Paul Lukowicz
Jakob Karolus
Abstract

Latent space representations are critical for understanding and improving the behavior of machine learning models, yet they often remain obscure and intricate. Understanding and exploring the latent space has the potential to contribute valuable human intuition and domain expertise. In this work, we present HILL, an interactive framework that allows users to incorporate human intuition into model training by interactively reshaping latent space representations. The modifications are infused into the training loop via a novel approach inspired by knowledge distillation, which treats the user's modifications as a teacher that guides the model in reshaping its intrinsic latent representation. This process allows the model to converge more effectively, overcome inefficiencies, and provide beneficial insights to the user. We evaluated HILL in a user study in which participants were tasked with training an optimal model while we closely observed their strategies. The results demonstrate that human-guided latent space modifications enhance model performance while maintaining generalization, while also revealing the risks of incorporating user biases. Our work introduces a novel human-AI interaction paradigm that infuses human intuition into model training and critically examines the impact of human intervention on training strategies and potential biases.
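The distillation-inspired guidance described above can be sketched as an auxiliary loss that pulls the model's latent embeddings toward the user-edited positions (the "teacher"). This is a minimal, hypothetical illustration, not the paper's actual loss; the function name, weighting scheme, and toy data are assumptions for demonstration only.

```python
import numpy as np

def latent_guidance_loss(latents, edited_latents, weight=1.0):
    """Hypothetical distillation-style penalty: mean squared distance
    between the model's latent points and the user-edited (teacher)
    positions. In a full training loop this would be added to the
    task loss, e.g. total = task_loss + weight * guidance."""
    diff = latents - edited_latents
    return weight * np.mean(np.sum(diff ** 2, axis=1))

# Toy example: three 2-D latent points; the user drags the first point
# toward a cluster they believe it belongs to, leaving the rest unchanged.
latents = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
edited = np.array([[1.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # only point 0 moved

print(latent_guidance_loss(latents, edited))
```

Only the displaced point contributes to the penalty, so the gradient of this term would nudge the encoder to place that sample where the user put it, which is the intuition-infusion mechanism the abstract describes.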

View on arXiv
@article{geissler2025_2505.06325,
  title={Human in the Latent Loop (HILL): Interactively Guiding Model Training Through Human Intuition},
  author={Daniel Geissler and Lars Krupp and Vishal Banwari and David Habusch and Bo Zhou and Paul Lukowicz and Jakob Karolus},
  journal={arXiv preprint arXiv:2505.06325},
  year={2025}
}