
Concadia: Tackling Image Accessibility with Descriptive Texts and Context

Abstract

Images have become an integral part of online media. This has enhanced the dissemination of knowledge, but it poses serious accessibility challenges. The HTML "alt" field is hidden by default and designated for supplying a description that could replace the image, but it is rarely used. By contrast, image captions appear alongside the image and are more abundant, but they are written to supply additional information and generally lack the details required for accessibility. These terms are often treated as synonyms, but we argue that a distinction is essential. To address this, we introduce the publicly available Wikipedia-based corpus Concadia, which consists of 96,918 images with corresponding English-language descriptions, captions, and surrounding context. We use Concadia to characterize the commonalities and differences between descriptions and captions. This leads us to the hypothesis that captions, while not substitutes for descriptions, can provide a useful signal for creating effective descriptions. We substantiate this hypothesis by showing that image description systems trained on Concadia benefit from having caption embeddings as part of their inputs. Finally, we provide evidence from a human-subjects experiment that human-created captions and descriptions have distinct communicative purposes, and that our generated texts follow this same pattern. These experiments begin to show how Concadia can be a powerful tool in addressing the underlying accessibility issues posed by image data.
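To make the modeling claim concrete, the following is a minimal sketch of one way caption embeddings could be supplied as an additional input to an image description generator. All module names, dimensions, and the fusion strategy here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DescriptionGenerator(nn.Module):
    """Hypothetical description model conditioned on both image features
    and a caption embedding (illustrative only)."""

    def __init__(self, image_dim=2048, caption_dim=768,
                 hidden_dim=512, vocab_size=10000):
        super().__init__()
        # Project each modality into a shared space before fusing.
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.caption_proj = nn.Linear(caption_dim, hidden_dim)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, caption_emb, token_embs):
        # Fuse image and caption signals by summing their projections;
        # the fused vector initializes the decoder's hidden state.
        fused = torch.tanh(self.image_proj(image_feats)
                           + self.caption_proj(caption_emb))
        h0 = fused.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        # token_embs: already-embedded target tokens (teacher forcing),
        # assumed to have dimension hidden_dim.
        hidden, _ = self.decoder(token_embs, (h0, c0))
        return self.out(hidden)  # per-token vocabulary logits

model = DescriptionGenerator()
img = torch.randn(4, 2048)      # e.g. pooled CNN image features
cap = torch.randn(4, 768)       # e.g. a sentence embedding of the caption
toks = torch.randn(4, 12, 512)  # embedded target description tokens
logits = model(img, cap, toks)  # shape: (4, 12, vocab_size)
```

The design choice this sketch illustrates is simply that the caption enters the model as a conditioning signal alongside the image, rather than as the target text; the abstract's finding is that such a signal improves generated descriptions.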
