Deep Learning Approaches on Image Captioning: A Review
Image captioning is a challenging research area that aims to generate natural language descriptions for visual content. The advent of deep learning and, more recently, vision-language pre-training has revolutionized the field, leading to more sophisticated methods and improved performance. This survey provides a structured review of deep learning methods in image captioning, organizing them into a comprehensive taxonomy and discussing each category in detail. We also cover the widely used datasets and the evaluation metrics designed to assess the performance of image captioning models, and we highlight open challenges in the field, such as object hallucination, missing context, difficult illumination conditions, contextual understanding, and referring expressions. We rank deep learning methods by their performance on established evaluation metrics. In addition to identifying the current state of the art, we suggest potential directions for future research, including mitigating information misalignment between the image and text modalities, overcoming dataset bias, incorporating vision-language pre-training into caption generation, and developing better evaluation tools for measuring the quality of image captions.
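As a concrete illustration of the evaluation metrics the survey reviews (this sketch is not from the paper; the captions and smoothing choice are made up for the example), BLEU remains one of the most common ways to score a generated caption against reference captions:

```python
# Hypothetical example: scoring one generated caption against two
# human-written references with sentence-level BLEU via NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "a dog runs across the grassy field".split(),
    "a brown dog is running on grass".split(),
]
candidate = "a dog is running on the grass".split()

# Smoothing avoids zero scores when higher-order n-grams have no overlap.
score = sentence_bleu(
    references,
    candidate,
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU: {score:.3f}")
```

N-gram metrics like BLEU only measure surface overlap with the references, which is one reason the survey argues for better evaluation tools for caption quality.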