Robust Computer-Vision based Construction Site Detection for Assistive-Technology Applications

6 March 2025
Junchi Feng
Giles Hamilton-Fletcher
Nikhil Ballem
Michael Batavia
Yifei Wang
Jiuling Zhong
Maurizio Porfiri
John-Ross Rizzo
Abstract

Navigating urban environments poses significant challenges for people with disabilities, particularly those with blindness and low vision. Environments with dynamic and unpredictable elements like construction sites are especially challenging. Construction sites introduce hazards like uneven surfaces, obstructive barriers, hazardous materials, and excessive noise, and they can alter routing, complicating safe mobility. Existing assistive technologies are limited, as navigation apps do not account for construction sites during trip planning, and detection tools that attempt hazard recognition struggle to address the extreme variability of construction paraphernalia. This study introduces a novel computer vision-based system that integrates open-vocabulary object detection, a YOLO-based scaffolding-pole detection model, and an optical character recognition (OCR) module to comprehensively identify and interpret construction site elements for assistive navigation. In static testing across seven construction sites, the system achieved an overall accuracy of 88.56%, reliably detecting objects from 2 m to 10 m within a 0°–75° angular offset. At closer distances (2–4 m), the detection rate was 100% at all tested angles. At…
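The abstract describes fusing three modules: an open-vocabulary object detector, a YOLO-based scaffolding-pole model, and an OCR module. A minimal sketch of one plausible fusion step is below; the `Detection` type, signage keywords, and confidence threshold are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str       # hypothetical module tag: "open_vocab", "pole_yolo", or "ocr"
    label: str        # object class or recognized sign text
    confidence: float # detector confidence in [0, 1]; unused for OCR matches

# Assumed set of construction-signage phrases for the OCR branch.
CONSTRUCTION_KEYWORDS = {"detour", "sidewalk closed", "construction ahead"}

def is_construction_site(detections, conf_threshold=0.5):
    """Flag a likely construction site if any module fires confidently.

    OCR hits are matched against known signage phrases; visual detections
    (open-vocabulary or pole model) must exceed a confidence threshold.
    """
    for d in detections:
        if d.source == "ocr":
            if d.label.lower().strip() in CONSTRUCTION_KEYWORDS:
                return True
        elif d.confidence >= conf_threshold:
            return True
    return False

# Example: a weak cone detection alone is ignored, but a "Detour" sign fires.
frame = [
    Detection("open_vocab", "traffic cone", 0.3),
    Detection("ocr", "Detour", 1.0),
]
print(is_construction_site(frame))  # True
```

Any-module-fires ("OR") fusion favors recall, which fits an assistive-safety setting where missing a hazard is costlier than a false alert.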

@article{feng2025_2503.04139,
  title={Robust Computer-Vision based Construction Site Detection for Assistive-Technology Applications},
  author={Junchi Feng and Giles Hamilton-Fletcher and Nikhil Ballem and Michael Batavia and Yifei Wang and Jiuling Zhong and Maurizio Porfiri and John-Ross Rizzo},
  journal={arXiv preprint arXiv:2503.04139},
  year={2025}
}