Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences

16 July 2021
Ikko Yamane
Junya Honda
Florian Yger
Masashi Sugiyama
Topics: SSL, FedML, OOD
Abstract

Ordinary supervised learning is useful when we have paired training data of input $X$ and output $Y$. However, such paired data can be difficult to collect in practice. In this paper, we consider the task of predicting $Y$ from $X$ when we have no paired data of them, but we have two separate, independent datasets of $X$ and $Y$ each observed with some mediating variable $U$; that is, we have two datasets $S_X = \{(X_i, U_i)\}$ and $S_Y = \{(U'_j, Y'_j)\}$. A naive approach is to predict $U$ from $X$ using $S_X$ and then $Y$ from $U$ using $S_Y$, but we show that this is not statistically consistent. Moreover, predicting $U$ can be more difficult than predicting $Y$ in practice, e.g., when $U$ has higher dimensionality. To circumvent the difficulty, we propose a new method that avoids predicting $U$ but directly learns $Y = f(X)$ by training $f(X)$ with $S_X$ to predict $h(U)$, which is trained with $S_Y$ to approximate $Y$. We prove statistical consistency and error bounds of our method and experimentally confirm its practical usefulness.
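
To make the construction concrete, here is a minimal Python sketch of the two-step idea described in the abstract (not the authors' code; the paper also covers joint training and the accompanying theory). All variable names and the synthetic data are illustrative, and ridge regression stands in for an arbitrary regressor: h is fit on S_Y to approximate Y from U, then f is fit on S_X to match the pseudo-targets h(U_i), so f learns to predict Y from X without any paired (X, Y) data and without ever reconstructing U.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic setup (illustrative only): a mediator U links X and Y,
# but (X, Y) pairs are never observed together.
n_x, n_y, d_u, d_x = 500, 500, 10, 3
A = rng.normal(size=(d_u, d_x))
w = rng.normal(size=d_u)

U_x = rng.normal(size=(n_x, d_u))                 # S_X = {(X_i, U_i)}
X = U_x @ A + 0.1 * rng.normal(size=(n_x, d_x))

U_y = rng.normal(size=(n_y, d_u))                 # S_Y = {(U'_j, Y'_j)}
Y = U_y @ w + 0.1 * rng.normal(size=n_y)

# Step 1: fit h(U) to approximate Y on S_Y.
h = Ridge(alpha=1.0).fit(U_y, Y)

# Step 2: fit f(X) to the pseudo-targets h(U_i) on S_X, so f is never
# asked to predict the (possibly high-dimensional) U itself.
f = Ridge(alpha=1.0).fit(X, h.predict(U_x))

# f now maps X to an estimate of Y despite having no (X, Y) pairs.
U_test = rng.normal(size=(5, d_u))
X_test = U_test @ A
print(np.c_[f.predict(X_test), U_test @ w])       # prediction vs. noiseless Y

Note the contrast with the naive pipeline, which would regress U on X and then apply h to the predicted mediator: the sketch above only ever uses h's outputs as scalar training targets, which is why the dimensionality of U never enters f's output space.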
