Vid2Coach: Transforming How-To Videos into Task Assistants

31 May 2025
Mina Huh
Zihui Xue
Ujjaini Das
Kumar Ashutosh
Kristen Grauman
Amy Pavel
Main: 21 pages · 12 figures · 9 tables · Bibliography: 1 page · Appendix: 1 page
Abstract

People use videos to learn new recipes, exercises, and crafts. Such videos remain difficult for blind and low vision (BLV) people to follow as they rely on visual comparison. Our observations of visual rehabilitation therapists (VRTs) guiding BLV people to follow how-to videos revealed that VRTs provide both proactive and responsive support including detailed descriptions, non-visual workarounds, and progress feedback. We propose Vid2Coach, a system that transforms how-to videos into wearable camera-based assistants that provide accessible instructions and mixed-initiative feedback. From the video, Vid2Coach generates accessible instructions by augmenting narrated instructions with demonstration details and completion criteria for each step. It then uses retrieval-augmented-generation to extract relevant non-visual workarounds from BLV-specific resources. Vid2Coach then monitors user progress with a camera embedded in commercial smart glasses to provide context-aware instructions, proactive feedback, and answers to user questions. BLV participants (N=8) using Vid2Coach completed cooking tasks with 58.5% fewer errors than when using their typical workflow and wanted to use Vid2Coach in their daily lives. Vid2Coach demonstrates an opportunity for AI visual assistance that strengthens rather than replaces non-visual expertise.
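
To make the pipeline described in the abstract concrete, the sketch below outlines in Python how narrated steps might be augmented with demonstration details, completion criteria, and retrieval-augmented non-visual workarounds, and then coached against a smart-glasses camera stream. This is a minimal illustrative sketch, not the authors' implementation: every name here (Step, augment_steps, coach_loop, and the injected helpers retrieve_blv_workarounds, camera_frame_stream, check_progress, speak) is an assumption introduced for illustration.

# Hypothetical sketch of a Vid2Coach-style pipeline; all names are illustrative
# assumptions, not the paper's actual code.
from dataclasses import dataclass, field


@dataclass
class Step:
    narration: str                 # instruction transcribed from the video
    demonstration_details: str     # visual details extracted from the demonstration
    completion_criteria: str       # how to tell the step is done
    workarounds: list[str] = field(default_factory=list)  # non-visual alternatives


def augment_steps(video_steps, retrieve_blv_workarounds):
    """Augment each narrated step with demonstration details, completion
    criteria, and retrieval-augmented non-visual workarounds drawn from
    BLV-specific resources (retrieval function supplied by the caller)."""
    steps = []
    for raw in video_steps:
        step = Step(
            narration=raw["narration"],
            demonstration_details=raw["demo_details"],
            completion_criteria=raw["completion_criteria"],
        )
        # Retrieval-augmented generation step: look up relevant non-visual
        # techniques (e.g., tactile or auditory cues) for this instruction.
        step.workarounds = retrieve_blv_workarounds(step.narration)
        steps.append(step)
    return steps


def coach_loop(steps, camera_frame_stream, check_progress, speak):
    """Mixed-initiative coaching loop: read accessible instructions aloud,
    monitor smart-glasses camera frames, give proactive feedback, and advance
    once a step's completion criteria are met."""
    for step in steps:
        speak(f"{step.narration}. {step.demonstration_details}")
        for tip in step.workarounds:
            speak(f"Non-visual tip: {tip}")
        for frame in camera_frame_stream():
            status = check_progress(frame, step.completion_criteria)
            if status.done:
                speak("Step complete.")
                break
            if status.feedback:
                speak(status.feedback)  # context-aware, proactive feedback

In this sketch the progress checker and the speech output are passed in as plain callables, so the same loop could be driven by any vision-language model and any screen reader or text-to-speech backend; the paper itself does not prescribe this interface.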

@article{huh2025_2506.00717,
  title={Vid2Coach: Transforming How-To Videos into Task Assistants},
  author={Mina Huh and Zihui Xue and Ujjaini Das and Kumar Ashutosh and Kristen Grauman and Amy Pavel},
  journal={arXiv preprint arXiv:2506.00717},
  year={2025}
}