Untangling Lariats: Subgradient Following of Variationally Penalized Objectives

7 May 2024
Kai-Chia Mo
Shai Shalev-Shwartz
Nisæl Shártov
Abstract

We describe an apparatus for subgradient following of the optimum of convex problems with variational penalties. In this setting, we receive a sequence $y_1,\ldots,y_n$ and seek a smooth sequence $x_1,\ldots,x_n$. The smooth sequence needs to attain the minimum Bregman divergence to the input sequence with additive variational penalties of the general form $\sum_i g_i(x_{i+1}-x_i)$. We derive known algorithms such as the fused lasso and isotonic regression as special cases of our approach. Our approach also facilitates new variational penalties such as non-smooth barrier functions. We then derive a novel lattice-based procedure for subgradient following of variational penalties characterized through the output of arbitrary convolutional filters. This paradigm yields efficient solvers for high-order filtering problems of temporal sequences in which sparse discrete derivatives such as acceleration and jerk are desirable. We also introduce and analyze new multivariate problems in which $\mathbf{x}_i,\mathbf{y}_i\in\mathbb{R}^d$ with variational penalties that depend on $\|\mathbf{x}_{i+1}-\mathbf{x}_i\|$. The norms we consider are $\ell_2$ and $\ell_\infty$, which promote group sparsity.
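
As a concrete illustration of the fused-lasso special case mentioned in the abstract (squared-error divergence with $g_i(u)=\lambda|u|$), the sketch below minimizes $\frac{1}{2}\|x-y\|^2 + \lambda\sum_i|x_{i+1}-x_i|$ by plain subgradient descent in NumPy. This is only a hedged baseline, not the paper's exact subgradient-following apparatus; the function name, step-size schedule, and iteration count are assumptions made for the example.

import numpy as np

def fused_lasso_subgradient(y, lam, lr=0.05, n_iters=5000):
    # Minimize 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    # via the classical subgradient method with a diminishing step.
    # Illustrative only; not the exact solver derived in the paper.
    x = np.asarray(y, dtype=float).copy()
    for t in range(1, n_iters + 1):
        g = x - y                    # gradient of the quadratic fidelity term
        s = np.sign(np.diff(x))      # s[i] = sign(x[i+1] - x[i])
        g[:-1] -= lam * s            # d/dx_i     of lam*|x[i+1]-x[i]| is -lam*s[i]
        g[1:]  += lam * s            # d/dx_{i+1} of lam*|x[i+1]-x[i]| is +lam*s[i]
        x -= (lr / np.sqrt(t)) * g   # diminishing step size
    return x

# Example: recover a piecewise-constant signal from noisy observations.
rng = np.random.default_rng(0)
signal = np.repeat([0.0, 2.0, 1.0], 50)
y = signal + 0.3 * rng.standard_normal(signal.size)
x_hat = fused_lasso_subgradient(y, lam=1.0)

Note that this baseline only approaches the optimum at the usual $O(1/\sqrt{t})$ subgradient-method rate, which is part of why exact solvers such as those derived in the paper are attractive.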

@article{mo2025_2405.04710,
  title={Untangling Lariats: Subgradient Following of Variationally Penalized Objectives},
  author={Kai-Chia Mo and Shai Shalev-Shwartz and Nisæl Shártov},
  journal={arXiv preprint arXiv:2405.04710},
  year={2025}
}