ResearchTrend.AI

Label Distribution Learning with Biased Annotations by Learning Multi-Label Representation

3 February 2025
Zhiqiang Kou
Si Qin
Hailin Wang
Mingkun Xie
Shuo Chen
Yuheng Jia
Tongliang Liu
Masashi Sugiyama
Xin Geng
Abstract

Multi-label learning (MLL) has gained attention for its ability to represent real-world data. Label Distribution Learning (LDL), an extension of MLL to learning from label distributions, faces challenges in collecting accurate label distributions. To address the issue of biased annotations, existing works rely on the low-rank assumption to recover true distributions from biased observations by exploring label correlations. However, recent evidence shows that label distributions tend to be full-rank, and naively applying low-rank approximation to biased observations leads to inaccurate recovery and performance degradation. In this paper, we address the problem of LDL with biased annotations from a novel perspective: we first degenerate the soft label distribution into a hard multi-hot label and then recover the true label information for each instance. This idea stems from the insight that assigning hard multi-hot labels is often easier than assigning a soft label distribution and shows stronger immunity to noise disturbances, leading to smaller label bias. Moreover, assuming that the multi-label space used for predicting label distributions is low-rank offers a more reasonable way to capture label correlations. Theoretical analysis and experiments on real-world datasets confirm the effectiveness and robustness of our method.
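The two-step idea in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm; it only shows the general pattern under stated assumptions: a hypothetical threshold `tau` degenerates each soft label distribution into a multi-hot vector, and a truncated SVD stands in for exploiting the low-rank structure of the multi-label space.

```python
import numpy as np

def degenerate_to_multihot(D, tau=0.1):
    """Threshold soft label distributions D (n x c, rows sum to 1)
    into hard multi-hot labels. tau is a hypothetical threshold,
    not a value from the paper."""
    return (D >= tau).astype(int)

def low_rank_recover(Y, rank=2):
    """Project the multi-hot label matrix Y onto its best rank-`rank`
    approximation via truncated SVD, a generic stand-in for
    capturing label correlations in the multi-label space."""
    U, s, Vt = np.linalg.svd(Y.astype(float), full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt

# Toy biased soft label distributions: 4 instances, 3 labels.
D = np.array([
    [0.70, 0.25, 0.05],
    [0.60, 0.35, 0.05],
    [0.05, 0.30, 0.65],
    [0.05, 0.40, 0.55],
])
Y = degenerate_to_multihot(D, tau=0.1)  # hard multi-hot labels
Y_hat = low_rank_recover(Y, rank=2)     # low-rank multi-label representation
```

The hard thresholding step discards the fine-grained (and potentially noise-corrupted) degree information, which is the source of the claimed robustness; the low-rank step then operates on the multi-label matrix rather than on the full-rank distribution matrix.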

@article{kou2025_2502.01170,
  title={Label Distribution Learning with Biased Annotations by Learning Multi-Label Representation},
  author={Zhiqiang Kou and Si Qin and Hailin Wang and Mingkun Xie and Shuo Chen and Yuheng Jia and Tongliang Liu and Masashi Sugiyama and Xin Geng},
  journal={arXiv preprint arXiv:2502.01170},
  year={2025}
}