Multitask Learning via Shared Features: Algorithms and Hardness

7 September 2022
Konstantina Bairaktari
Guy Blanc
Li-Yang Tan
Jonathan R. Ullman
Lydia Zakynthinou
arXiv:2209.03112
Abstract

We investigate the computational efficiency of multitask learning of Boolean functions over the $d$-dimensional hypercube that are related by means of a feature representation of size $k \ll d$ shared across all tasks. We present a polynomial-time multitask learning algorithm for the concept class of halfspaces with margin $\gamma$, which is based on a simultaneous boosting technique and requires only $\textrm{poly}(k/\gamma)$ samples per task and $\textrm{poly}(k\log(d)/\gamma)$ samples in total. In addition, we prove a computational separation: assuming there exists a concept class that cannot be learned in the attribute-efficient model, we construct another concept class that can be learned in the attribute-efficient model but cannot be multitask learned efficiently -- multitask learning this concept class either requires super-polynomial time or a much larger total number of samples.
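To make the shared-feature idea concrete, here is a minimal Python sketch of a simultaneous boosting loop. It is an illustration under stated assumptions, not the authors' algorithm: the function name simultaneous_boosting, the per-coordinate scoring rule, and the decision-stump weak learners are all hypothetical choices. Each round greedily selects one coordinate of the hypercube whose stumps are useful to all tasks at once, so the union of selected coordinates plays the role of the shared feature representation of size k ≪ d, while every task keeps its own boosting weights and ensemble, as in standard single-task boosting.

import numpy as np

def simultaneous_boosting(task_data, num_rounds, lr=0.5):
    # task_data: list of (X, y) pairs with X in {-1,+1}^{n_t x d} and
    # y in {-1,+1}^{n_t}. All names and design choices here are
    # illustrative assumptions, not the paper's algorithm.
    d = task_data[0][0].shape[1]
    # One boosting distribution per task, initialized uniform.
    weights = [np.full(len(y), 1.0 / len(y)) for _, y in task_data]
    shared_features = []                 # coordinates chosen so far
    ensembles = [[] for _ in task_data]  # per-task weighted stumps

    for _ in range(num_rounds):
        # Score every coordinate by its summed weighted correlation with
        # the labels across ALL tasks; this is the step that couples tasks.
        scores = np.zeros(d)
        for (X, y), w in zip(task_data, weights):
            scores += np.abs(w @ (X * y[:, None]))
        j = int(np.argmax(scores))
        shared_features.append(j)

        # Each task fits its own signed stump on the shared coordinate j
        # and reweights its examples, exactly as in single-task boosting.
        for t, ((X, y), w) in enumerate(zip(task_data, weights)):
            sign = 1.0 if w @ (X[:, j] * y) >= 0 else -1.0
            ensembles[t].append((j, sign * lr))
            w = w * np.exp(-lr * sign * X[:, j] * y)
            weights[t] = w / w.sum()

    return shared_features, ensembles

def predict(ensemble, X):
    # Sign of the weighted vote over the selected shared coordinates.
    return np.sign(sum(alpha * X[:, j] for j, alpha in ensemble))

For instance, with two tasks task_data = [(X1, y1), (X2, y2)], where the Xs are ±1 matrices and the ys are ±1 label vectors, simultaneous_boosting(task_data, num_rounds=k) returns the k selected coordinates together with per-task ensembles that predict can evaluate. Scoring coordinates by their summed weighted correlation across tasks is the choice that couples the tasks; dropping the sum recovers independent single-task boosting.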
