ResearchTrend.AI


Textile Analysis for Recycling Automation using Transfer Learning and Zero-Shot Foundation Models

6 June 2025
Yannis Spyridis
Vasileios Argyriou
arXiv (abs) · PDF · HTML
Main: 6 pages · 4 figures · Bibliography: 1 page
Abstract

Automated sorting is crucial for improving the efficiency and scalability of textile recycling, but accurately identifying material composition and detecting contaminants from sensor data remains challenging. This paper investigates the use of standard RGB imagery, a cost-effective sensing modality, for key pre-processing tasks in an automated system. We present computer vision components designed for a conveyor belt setup to perform (a) classification of four common textile types and (b) segmentation of non-textile features such as buttons and zippers. For classification, several pre-trained architectures were evaluated using transfer learning and cross-validation, with EfficientNetB0 achieving the best performance on a held-out test set with 81.25% accuracy. For feature segmentation, a zero-shot approach combining the Grounding DINO open-vocabulary detector with the Segment Anything Model (SAM) was employed, achieving an mIoU of 0.90 for the generated masks against ground truth. This study demonstrates the feasibility of using RGB images coupled with modern deep learning techniques, including transfer learning for classification and foundation models for zero-shot segmentation, to enable essential analysis steps for automated textile recycling pipelines.
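The abstract reports segmentation quality as mean Intersection-over-Union (mIoU) between SAM-generated masks and ground truth. As a minimal illustration of how that metric is computed, here is a hedged sketch in plain Python; the helper names (`iou`, `mean_iou`) and the toy masks are hypothetical, not taken from the paper's code.

```python
def iou(pred, truth):
    """IoU between two binary masks, given as flat lists of 0/1 pixels."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # two empty masks agree perfectly

def mean_iou(pairs):
    """Average IoU over a list of (predicted, ground-truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)

# Toy example: two 2x2 masks flattened to length-4 lists.
pred  = [1, 1, 0, 0]
truth = [1, 0, 0, 0]
print(iou(pred, truth))  # 1 overlap pixel / 2 union pixels = 0.5
```

In practice the masks would come from the Grounding DINO + SAM pipeline as 2-D arrays, and the average would run over the evaluation set to yield a figure comparable to the reported 0.90.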

@article{spyridis2025_2506.06569,
  title={Textile Analysis for Recycling Automation using Transfer Learning and Zero-Shot Foundation Models},
  author={Yannis Spyridis and Vasileios Argyriou},
  journal={arXiv preprint arXiv:2506.06569},
  year={2025}
}