Standardized surveys scale efficiently but sacrifice depth, while conversational interviews improve response quality at the cost of scalability and consistency. This study bridges the gap between these methods by introducing a framework for AI-assisted conversational interviewing. To evaluate this framework, we conducted a web survey experiment in which 1,800 participants were randomly assigned to text-based conversational AI agents, or "textbots", that dynamically probed respondents for elaboration and interactively coded open-ended responses. We assessed textbot performance in terms of coding accuracy, response quality, and respondent experience. Our findings reveal that textbots perform moderately well at live coding even without survey-specific fine-tuning, though false positive errors were slightly inflated by respondent acquiescence bias. Open-ended responses were more detailed and informative, but this came at a slight cost to respondent experience. These findings highlight the feasibility of using AI methods to enhance open-ended data collection in web surveys.
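To make the interviewing framework concrete, the sketch below illustrates one way a single "textbot" turn could work: the agent live-codes an open-ended answer against a codebook and, when the answer is thin, generates a follow-up probe. This is a minimal illustration only; the codebook categories, the word-count heuristic, and the ask_llm() stub are assumptions for the sketch, not the authors' actual implementation.

# Hypothetical sketch of one textbot turn: code an open-ended answer and
# decide whether to probe for elaboration. Not the study's implementation.

CODEBOOK = ["cost", "convenience", "trust", "other"]  # example answer categories

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model chat API.
    Returns canned replies so the sketch runs offline."""
    if prompt.startswith("Classify"):
        return "convenience"
    return "Could you say a bit more about what made it easy for you?"

def code_response(answer: str) -> str:
    """Interactively code the respondent's answer into one codebook category."""
    prompt = (
        "Classify the survey answer into exactly one of these categories: "
        f"{', '.join(CODEBOOK)}.\nAnswer: {answer}\nCategory:"
    )
    label = ask_llm(prompt).strip().lower()
    return label if label in CODEBOOK else "other"

def needs_probe(answer: str, min_words: int = 8) -> bool:
    """Crude elaboration check: probe if the answer is very short."""
    return len(answer.split()) < min_words

def textbot_turn(question: str, answer: str) -> dict:
    """One turn: code the answer and, if needed, generate a neutral follow-up probe."""
    result = {"question": question, "answer": answer, "code": code_response(answer)}
    if needs_probe(answer):
        result["probe"] = ask_llm(
            f"The respondent answered '{answer}' to '{question}'. "
            "Write one neutral, non-leading follow-up question asking them to elaborate."
        )
    return result

if __name__ == "__main__":
    print(textbot_turn("Why did you choose your current internet provider?", "It was easy."))

In a live survey, the probe would be shown to the respondent and their elaboration appended to the open-ended response before final coding; the stubbed ask_llm() would instead call whichever language model the survey platform uses.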
@article{barari2025_2504.13908,
  title={AI-Assisted Conversational Interviewing: Effects on Data Quality and User Experience},
  author={Soubhik Barari and Jarret Angbazo and Natalie Wang and Leah M. Christian and Elizabeth Dean and Zoe Slowinski and Brandon Sepulvado},
  journal={arXiv preprint arXiv:2504.13908},
  year={2025}
}