arXiv:2012.14348
Generalized Quantile Loss for Deep Neural Networks

28 December 2020
Dvir Ben-Or
Michael Kolomenkin
G. Shabat
    UQCV
Abstract

This note presents a simple way to add a count (or quantile) constraint to a regression neural net, such that given n samples in the training set it guarantees that the predictions of m < n samples will be larger than the actual values (the labels). Unlike standard quantile regression networks, the presented method can be applied to any loss function and not necessarily to the standard quantile regression loss, which minimizes the mean absolute differences. Since this count constraint has zero gradients almost everywhere, it cannot be optimized using standard gradient descent methods. To overcome this problem, an alternation scheme, which is based on standard neural network optimization procedures, is presented with some theoretical analysis.
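
The abstract leaves the details of the alternation scheme to the paper itself. The sketch below is a minimal, hypothetical illustration of the general idea in PyTorch, not the authors' exact procedure: it alternates between an ordinary gradient step on an arbitrary base loss and a zero-gradient constraint step that shifts the network's output bias so that exactly m of the n training predictions exceed their labels. The network class, the bias-shift heuristic, and all names below are our own assumptions for illustration.

```python
# Illustrative sketch only (assumptions: model architecture, bias-shift step,
# and training loop are ours, not necessarily the paper's alternation scheme).
import torch
import torch.nn as nn


class RegressionNet(nn.Module):
    def __init__(self, d_in: int, d_hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, 1)  # the final bias is used for the count shift

    def forward(self, x):
        return self.head(self.body(x)).squeeze(-1)


@torch.no_grad()
def enforce_count(model: RegressionNet, x: torch.Tensor, y: torch.Tensor, m: int):
    """Shift the output bias so that exactly m of the n predictions exceed their labels."""
    residuals = model(x) - y                   # > 0 means the prediction is above the label
    top = torch.topk(residuals, m + 1).values  # the m+1 largest residuals (requires m < n)
    delta = -(top[m - 1] + top[m]) / 2         # midpoint between the m-th and (m+1)-th largest
    model.head.bias += delta                   # after the shift, m residuals are positive


def train(model, x, y, m, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    base_loss = nn.MSELoss()                   # any differentiable loss can be used here
    for _ in range(epochs):
        opt.zero_grad()
        loss = base_loss(model(x), y)          # step 1: ordinary gradient descent on the base loss
        loss.backward()
        opt.step()
        enforce_count(model, x, y, m)          # step 2: re-impose the zero-gradient count constraint


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, m = 512, 8, 100                      # target: m of n predictions above their labels
    x = torch.randn(n, d)
    y = x.sum(dim=1) + 0.1 * torch.randn(n)
    net = RegressionNet(d)
    train(net, x, y, m)
    count = int((net(x) > y).sum())
    print(f"predictions above label: {count} (target {m})")
```

The point of the alternation is that the constraint step never needs a gradient of the count constraint itself: the constraint is re-imposed exactly after each ordinary optimization step on whatever loss the network is trained with.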
