Zero-Shot Visual Slot Filling as Question Answering
This paper presents a new approach to slot filling that reformulates the task as question answering, replacing slot tags with rich natural-language questions that capture the semantics of visual information and the lexical text often displayed on device screens. These questions are paired with the user's utterance, and slots are extracted from the utterance using a state-of-the-art Transformer-based deep learning question answering system. An approach to further refining the model with multi-task training is also presented; the multi-task approach facilitates the incorporation of a large number of successive refinements and transfer learning across tasks. New visual slot datasets and a visual extension of the popular ATIS dataset are introduced to support research and experimentation on visual slot filling. Results show the new approach not only maintains robust accuracy under sparse training conditions but also achieves a state-of-the-art F1 of 0.97 on ATIS with approximately 1/10th of the training data.
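The core reformulation can be illustrated with a small sketch: each slot tag is mapped to a natural-language question, and a BIO-tagged slot-filling example is converted into extractive QA examples whose answer spans are the slot values. The slot names, question wordings, and the utterance below are illustrative assumptions, not the paper's actual question templates.

```python
# Hypothetical sketch of the slot-filling-as-QA reformulation.
# Slot-to-question mappings here are illustrative, not the paper's templates.
SLOT_QUESTIONS = {
    "fromloc.city_name": "What city is the user departing from?",
    "toloc.city_name": "What city is the user flying to?",
}

def bio_to_qa(tokens, tags):
    """Convert (token, BIO-tag) pairs into extractive QA examples.

    Each detected slot span becomes one example: a question for the
    slot, the utterance as context, and the span as the answer.
    """
    context = " ".join(tokens)
    examples = []
    i = 0
    while i < len(tags):
        if tags[i].startswith("B-"):
            slot = tags[i][2:]
            j = i + 1
            while j < len(tags) and tags[j] == f"I-{slot}":
                j += 1
            # Character offset of the answer span within the context.
            start = len(" ".join(tokens[:i])) + (1 if i else 0)
            examples.append({
                "question": SLOT_QUESTIONS.get(slot, f"What is the {slot}?"),
                "context": context,
                "answer": " ".join(tokens[i:j]),
                "answer_start": start,
            })
            i = j
        else:
            i += 1
    return examples

tokens = ["flights", "from", "boston", "to", "denver"]
tags = ["O", "O", "B-fromloc.city_name", "O", "B-toloc.city_name"]
for ex in bio_to_qa(tokens, tags):
    print(ex["question"], "->", ex["answer"])
# -> What city is the user departing from? -> boston
# -> What city is the user flying to? -> denver
```

At inference time, each question would instead be paired with the utterance and answered by an extractive QA model, so unseen slot types can be handled zero-shot simply by writing a new question.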