ResearchTrend.AI
Analyzing Transformer Models and Knowledge Distillation Approaches for Image Captioning on Edge AI

4 June 2025
Wing Man Casca Kwok
Yip Chiu Tung
Kunal Bhagchandani
    VLM
Main: 6 pages, 2 figures, 5 tables
Abstract

Edge computing decentralizes processing power to the network edge, enabling real-time AI-driven decision-making in IoT applications. In industrial automation, such as robotics and rugged edge AI, real-time perception and intelligence are critical for autonomous operations. Deploying transformer-based image captioning models at the edge can enhance machine perception, improve scene understanding for autonomous robots, and aid industrial inspection. However, edge and IoT devices are often constrained in computational resources to preserve physical agility, while still facing strict response-time requirements. Traditional deep learning models can be too large and computationally demanding for these devices. In this research, we present findings on transformer-based image captioning models that operate effectively on edge devices. By evaluating resource-efficient transformer models and applying knowledge distillation techniques, we demonstrate that inference can be accelerated on resource-constrained devices while maintaining model performance.
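The abstract does not detail the distillation setup, but knowledge distillation is commonly implemented as a temperature-scaled KL divergence between the teacher's and student's output distributions (the standard Hinton-style formulation). A minimal sketch of that objective; the function names and temperature value here are illustrative, not taken from the paper:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradient magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

In practice this term is mixed with the usual cross-entropy on ground-truth captions, and for a captioning model it would be applied per decoding step over the vocabulary logits.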

@article{kwok2025_2506.03607,
  title={Analyzing Transformer Models and Knowledge Distillation Approaches for Image Captioning on Edge AI},
  author={Wing Man Casca Kwok and Yip Chiu Tung and Kunal Bhagchandani},
  journal={arXiv preprint arXiv:2506.03607},
  year={2025}
}