ResearchTrend.AI
Revisiting Online Learning Approach to Inverse Linear Optimization: A Fenchel–Young Loss Perspective and Gap-Dependent Regret Analysis

23 January 2025
Shinsaku Sakaue
Han Bao
Taira Tsuchiya
Abstract

This paper revisits the online learning approach to inverse linear optimization studied by Bärmann et al. (2017), where the goal is to infer an unknown linear objective function of an agent from sequential observations of the agent's input-output pairs. First, we provide a simple understanding of the online learning approach through its connection to online convex optimization of \emph{Fenchel--Young losses}. As a byproduct, we present an offline guarantee on the \emph{suboptimality loss}, which measures how well predicted objectives explain the agent's choices, without assuming the optimality of the agent's choices. Second, assuming that there is a gap between optimal and suboptimal objective values in the agent's decision problems, we obtain an upper bound independent of the time horizon $T$ on the sum of suboptimality and \emph{estimate losses}, where the latter measures the quality of solutions recommended by predicted objectives. Interestingly, our gap-dependent analysis achieves a faster rate than the standard $O(\sqrt{T})$ regret bound by exploiting structures specific to inverse linear optimization, even though neither the loss functions nor their domains enjoy desirable properties, such as strong convexity.
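To make the online learning approach concrete, the following is a minimal sketch (not the authors' implementation) of online subgradient descent on the suboptimality loss $\ell_t(\theta) = \max_{x \in X_t}\langle\theta, x\rangle - \langle\theta, x_t\rangle$, assuming each feasible set $X_t$ is a small finite set given as rows of a matrix, so the forward problem is solved by enumeration; the function name `online_inverse_lo` and the projection onto the unit ball are illustrative choices, not taken from the paper.

```python
import numpy as np

def online_inverse_lo(observations, eta=0.1, dim=2):
    """Online subgradient sketch for inverse linear optimization.

    observations: list of (X_t, x_t) pairs, where X_t is an array whose
    rows are the feasible points of round t and x_t is the agent's
    observed choice. At each round we predict theta, suffer the
    suboptimality loss
        l_t(theta) = max_{x in X_t} <theta, x> - <theta, x_t>,
    and take a subgradient step using xhat_t - x_t, where xhat_t is a
    maximizer of <theta, .> over X_t.
    """
    theta = np.zeros(dim)
    total_loss = 0.0
    for X_t, x_t in observations:
        vals = X_t @ theta
        xhat = X_t[np.argmax(vals)]             # solve the forward problem
        total_loss += vals.max() - theta @ x_t  # suboptimality loss (>= 0 here)
        theta = theta - eta * (xhat - x_t)      # subgradient step
        theta /= max(1.0, np.linalg.norm(theta))  # project onto the unit ball
    return theta, total_loss
```

With observations generated by an agent maximizing a fixed linear objective, the accumulated suboptimality loss stays bounded while the predicted `theta` moves toward directions that explain the agent's choices, which is the quantity the paper's regret analysis controls.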

@article{sakaue2025_2501.13648,
  title={Revisiting Online Learning Approach to Inverse Linear Optimization: A Fenchel--Young Loss Perspective and Gap-Dependent Regret Analysis},
  author={Shinsaku Sakaue and Han Bao and Taira Tsuchiya},
  journal={arXiv preprint arXiv:2501.13648},
  year={2025}
}