A Novel Technique for Evidence based Conditional Inference in Deep Neural Networks via Latent Feature Perturbation

24 November 2018
Dinesh Khandelwal
Suyash Agrawal
Parag Singla
Chetan Arora
arXiv:1811.09796
Abstract

Auxiliary information can be exploited in machine learning models using the paradigm of evidence-based conditional inference. Multi-modal techniques in Deep Neural Networks (DNNs) can be seen as perturbing the latent feature representation to incorporate evidence from the auxiliary modality. However, they require training a specialized network that maps sparse evidence to a high-dimensional latent-space vector. Designing such a network, as well as collecting jointly labeled data to train it, is a non-trivial task. In this paper, we present a novel multi-task learning (MTL) based framework for evidence-based conditional inference in DNNs that overcomes both of these shortcomings. Our framework incorporates evidence as the output of secondary task(s), while modeling the original problem as the primary task of interest. During inference, we employ a novel Bayesian formulation to change the joint latent feature representation so as to maximize the probability of the observed evidence. Since our approach models evidence as a prediction from a DNN, this can often be achieved using standard pre-trained backbones for popular tasks, eliminating the need for training altogether. Even when training is required, our MTL architecture ensures it can be done without jointly labeled data. Exploiting evidence with our framework, we show an improvement of 3.9% over the state of the art for predicting semantic segmentation given image tags, and of 2.8% for predicting instance segmentation given image captions.
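The inference-time step described above (perturbing the shared latent feature so that the secondary head agrees with the observed evidence, then re-reading the primary head) can be illustrated with a minimal PyTorch sketch. All module names, layer sizes, and the simple quadratic penalty standing in for a Bayesian prior are assumptions made for illustration; this is not the authors' exact formulation.

import torch
import torch.nn as nn

# Illustrative multi-task model: a shared encoder feeds a primary head
# (e.g. segmentation) and a secondary head whose output plays the role
# of evidence (e.g. image tags). Layer sizes are placeholders.
class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=784, feat_dim=128, num_classes=10, num_tags=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.primary_head = nn.Linear(feat_dim, num_classes)   # task of interest
        self.secondary_head = nn.Linear(feat_dim, num_tags)    # evidence task

def infer_with_evidence(model, x, evidence, steps=50, lr=0.1, prior_weight=1.0):
    """Perturb the latent feature z so that the secondary head matches the
    observed evidence (a 0/1 tag vector here), then re-run the primary head
    on the perturbed z. The quadratic term keeps z close to the encoder
    output, a crude stand-in for a Gaussian prior around the unperturbed
    feature."""
    model.eval()
    with torch.no_grad():
        z0 = model.encoder(x)                       # unperturbed latent feature
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    evidence_loss = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        nll = evidence_loss(model.secondary_head(z), evidence)  # fit observed tags
        prior = prior_weight * ((z - z0) ** 2).mean()           # stay near z0
        (nll + prior).backward()
        opt.step()
    with torch.no_grad():
        return model.primary_head(z)                # prediction conditioned on evidence

In this sketch the two heads interact only through the shared latent vector, so each head could in principle be trained from its own labels; this is the property the abstract points to when it says training requires no jointly labeled data.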
