
arXiv:2212.01545
A Generalized Scalarization Method for Evolutionary Multi-objective Optimization

3 December 2022
Ruihao Zheng
Zhenkun Wang
arXiv (abs | PDF | HTML)
Abstract

The decomposition-based multi-objective evolutionary algorithm (MOEA/D) transforms a multi-objective optimization problem (MOP) into a set of single-objective subproblems for collaborative optimization. Mismatches between subproblems and solutions can lead to severe performance degradation of MOEA/D. Most existing mismatch coping strategies only work when the $L_{\infty}$ scalarization is used. A mismatch coping strategy that can use any $L_p$ scalarization, even when facing MOPs with non-convex Pareto fronts, is of great significance for MOEA/D. This paper uses the global replacement (GR) as the backbone. We analyze why GR can no longer avoid mismatches when $L_{\infty}$ is replaced by another $L_p$ with $p \in [1, \infty)$, and find that the $L_p$-based ($1 \le p < \infty$) subproblems have inconsistently large preference regions. When $p$ is set to a small value, some middle subproblems have very small preference regions, so their direction vectors cannot pass through their corresponding preference regions. Therefore, we propose a generalized $L_p$ (G$L_p$) scalarization to ensure that each subproblem's direction vector passes through its preference region. Our theoretical analysis shows that GR can always avoid mismatches when using the G$L_p$ scalarization for any $p \ge 1$. Experimental studies on various MOPs confirm the theoretical analysis.
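The abstract contrasts the $L_{\infty}$ (Chebyshev) scalarization with general $L_p$ scalarizations. As a minimal sketch of the standard weighted $L_p$ scalarization that this discussion builds on (not the paper's G$L_p$ variant, whose exact form is not given in the abstract), assuming the common MOEA/D formulation with an ideal point $z^*$ and weight vector $w$:

```python
import numpy as np

def lp_scalarization(f, z_star, w, p):
    """Weighted L_p scalarization commonly used in MOEA/D-style decomposition:
        g(x | w, z*) = ( sum_i w_i * |f_i(x) - z*_i|^p )^(1/p),
    with p = inf recovering the Chebyshev (L_inf) form
        g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    f: objective values of a solution; z_star: ideal point; w: weight vector.
    """
    d = np.abs(np.asarray(f, dtype=float) - np.asarray(z_star, dtype=float))
    w = np.asarray(w, dtype=float)
    if np.isinf(p):
        return float(np.max(w * d))
    return float(np.sum(w * d**p) ** (1.0 / p))

# Example: the same solution evaluated under different p values.
f = [3.0, 4.0]
z = [0.0, 0.0]
w = [1.0, 1.0]
print(lp_scalarization(f, z, w, 1))        # L_1:   3 + 4 = 7.0
print(lp_scalarization(f, z, w, 2))        # L_2:   sqrt(9 + 16) = 5.0
print(lp_scalarization(f, z, w, np.inf))   # L_inf: max(3, 4) = 4.0
```

Each subproblem in MOEA/D minimizes such a scalar function for its own weight vector; the paper's analysis concerns how the shape of each subproblem's preference region changes with $p$, which this simple form does not itself address.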
