We describe an apparatus for subgradient-following of the optimum of convex problems with variational penalties. In this setting, we receive a sequence $y_1,\ldots,y_n$ and seek a smooth sequence $x_1,\ldots,x_n$. The smooth sequence needs to attain the minimum Bregman divergence to the input sequence with additive variational penalties in the general form of $\sum_i g_i(x_{i+1}-x_i)$. We derive known algorithms such as the fused lasso and isotonic regression as special cases of our approach. Our approach also facilitates new variational penalties such as non-smooth barrier functions. We then derive a novel lattice-based procedure for subgradient following of variational penalties characterized through the output of arbitrary convolutional filters. This paradigm yields efficient solvers for high-order filtering problems of temporal sequences in which sparse discrete derivatives, such as acceleration and jerk, are desirable. We also introduce and analyze new multivariate problems with variational penalties that depend on the norm of successive differences, $\|x_{i+1}-x_i\|$. The norms we consider are $\ell_2$ and $\ell_\infty$, which promote group sparsity.
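To make one of the named special cases concrete: isotonic regression fits a monotone sequence $x_1\le\cdots\le x_n$ to the inputs under squared loss, and is classically solved by the Pool Adjacent Violators algorithm. The sketch below illustrates that special case only; it is not the paper's general subgradient-following apparatus, and the function name is chosen here for illustration.

```python
def isotonic_regression(y):
    """Pool Adjacent Violators (PAV): minimize sum_i (x_i - y_i)^2
    subject to x_1 <= x_2 <= ... <= x_n.

    Illustrative sketch of the isotonic-regression special case; not
    the paper's general variational-penalty solver.
    """
    # Each block is [total, count, mean] for a run of pooled entries.
    blocks = []
    for v in y:
        blocks.append([v, 1, v])
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][2] > blocks[-1][2]:
            t2, c2, _ = blocks.pop()
            t1, c1, _ = blocks.pop()
            total, count = t1 + t2, c1 + c2
            blocks.append([total, count, total / count])
    # Expand each pooled block back to its original positions.
    x = []
    for total, count, mean in blocks:
        x.extend([mean] * count)
    return x
```

For example, `isotonic_regression([1, 3, 2, 4])` pools the violating pair `(3, 2)` into their mean, yielding `[1, 2.5, 2.5, 4]`.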
@article{mo2025_2405.04710,
  title   = {Untangling Lariats: Subgradient Following of Variationally Penalized Objectives},
  author  = {Kai-Chia Mo and Shai Shalev-Shwartz and Nisæl Shártov},
  journal = {arXiv preprint arXiv:2405.04710},
  year    = {2025}
}