arXiv:1804.00636

Recursive Optimization of Convex Risk Measures: Mean-Semideviation Models

2 April 2018
Dionysios S. Kalogerias
Warren B. Powell
Abstract

We develop and analyze recursive, data-driven, stochastic subgradient methods for optimizing a new and versatile class of convex risk measures, termed here mean-semideviations. Their construction relies on the concept of a risk regularizer, a one-dimensional nonlinear map with certain properties, which essentially generalizes the positive-part weighting function appearing in the mean-upper-semideviation risk measure. After formally introducing mean-semideviations, we study their basic properties and present a fundamental constructive characterization result demonstrating their generality. We then introduce and rigorously analyze the MESSAGEp algorithm, an efficient stochastic subgradient procedure for iteratively solving convex mean-semideviation risk-averse problems to optimality. The MESSAGEp algorithm may be derived as an application of the T-SCGD algorithm of (Yang et al., 2018). However, the generic theoretical framework of (Yang et al., 2018) is narrow and structurally restrictive as far as the optimization of mean-semideviations, including the classical mean-upper-semideviation risk measure, is concerned. By exploiting problem structure, we propose a substantially weaker set of assumptions, under which we establish pathwise convergence of the MESSAGEp algorithm in the same strong sense as in (Yang et al., 2018). The new framework reveals a fundamental trade-off between the expansiveness of the random position function and the smoothness of the particular mean-semideviation risk measure under consideration. Further, we explicitly show that the class of mean-semideviation problems supported by our framework is strictly larger than the corresponding class of problems supported in (Yang et al., 2018). Thus, the applicability of compositional stochastic optimization is established for a strictly wider spectrum of mean-semideviation problems, justifying the purpose of this work.
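For orientation, the classical first-order mean-upper-semideviation risk measure referenced in the abstract is rho(Z) = E[Z] + c * E[(Z - E[Z])_+] with c in [0, 1]; mean-semideviations replace the positive-part map (.)_+ with a more general risk regularizer. The sketch below is an illustrative assumption, not the paper's MESSAGEp algorithm: a generic two-timescale compositional stochastic subgradient iteration for the classical mean-upper-semideviation objective, where the position function F, the sampling model, and the step-size schedules are stand-ins chosen only to make the nested-expectation structure concrete.

import numpy as np

# Minimal sketch (illustrative only, not the paper's MESSAGEp algorithm):
# a two-timescale compositional stochastic subgradient iteration for
#     rho(x) = E[F(x, W)] + c * E[ max(F(x, W) - E[F(x, W)], 0) ],
# with F, the distribution of W, and the step sizes chosen for illustration.

rng = np.random.default_rng(0)
c = 0.5           # risk-aversion weight, c in [0, 1]
d = 5             # decision dimension
x = np.zeros(d)   # decision variable
mu = 0.0          # running estimate of the inner expectation E[F(x, W)]

def F(x, w):      # random position (cost) function: a quadratic stand-in
    return 0.5 * np.sum((x - w) ** 2)

def grad_F(x, w): # gradient of F with respect to x
    return x - w

for k in range(1, 20001):
    alpha = 1.0 / k ** 0.75    # slow step size (decision update)
    beta = 1.0 / k ** 0.5      # fast step size (expectation tracking)

    w1, w2 = rng.normal(size=d), rng.normal(size=d)   # two fresh samples of W

    # Fast timescale: track the inner expectation E[F(x, W)].
    mu = (1.0 - beta) * mu + beta * F(x, w1)

    # Stochastic subgradient of E[F] + c * E[(F - E[F])_+] at the current x,
    # with the tracked value mu standing in for the exact inner expectation
    # and independent samples used for the indicator and the correction term.
    active = float(F(x, w2) > mu)
    g = grad_F(x, w2) + c * active * (grad_F(x, w2) - grad_F(x, w1))
    x = x - alpha * g

The structural point this mirrors is the compositional (nested-expectation) form of risk-averse objectives of this type: the inner mean must itself be tracked stochastically on a faster timescale, which is exactly the feature that compositional stochastic optimization analyses such as T-SCGD, and the paper's weaker framework, have to control.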
