A Dataless Reinforcement Learning Approach to Rounding Hyperplane Optimization for Max-Cut

19 May 2025
Gabriel Malikal
Ismail Alkhouri
Alvaro Velasquez
Adam M. Alessio
Saiprasad Ravishankar
Abstract

The decision version of the Maximum Cut (MaxCut) problem is NP-complete, so computing an optimal solution is NP-hard in the worst case. As a result, heuristic algorithms are commonly used, though designing them often requires significant domain expertise. More recently, learning-based methods trained on large (un)labeled datasets have been proposed; however, these approaches often struggle with generalizability and scalability. A well-known approximation algorithm for MaxCut is the Goemans-Williamson (GW) algorithm, which relaxes the Quadratic Unconstrained Binary Optimization (QUBO) formulation into a semidefinite program (SDP). The GW algorithm then converts the SDP solution into binary node assignments by hyperplane rounding: it samples a random hyperplane uniformly and assigns each node according to the side of the hyperplane on which its embedding vector falls. In this paper, we propose a training-data-free approach based on a non-episodic reinforcement learning formulation, in which an agent learns to select rounding hyperplanes that yield better cuts than those produced by the GW algorithm. By optimizing over a Markov Decision Process (MDP), our method consistently achieves better cuts across large-scale graphs with varying densities and degree distributions.
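The GW rounding step described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's method: the graph (a 4-cycle), the rank-1 embedding, and the helper names are all assumptions chosen for the example. For a bipartite graph the SDP optimum happens to be rank-1, so we can write the optimal embedding down directly instead of calling an SDP solver.

```python
import numpy as np

def hyperplane_round(V, rng):
    """GW rounding: sample a random hyperplane through the origin and
    assign each node to a side by the sign of <v_i, r>."""
    r = rng.normal(size=V.shape[1])   # random hyperplane normal
    x = np.sign(V @ r)
    x[x == 0] = 1.0                   # break ties (a measure-zero event)
    return x

def cut_value(W, x):
    """Cut weight for x in {-1,+1}^n: (1/4) * sum_ij W_ij (1 - x_i x_j)."""
    return 0.25 * float(np.sum(W * (1 - np.outer(x, x))))

# Toy instance: a 4-cycle, whose maximum cut is 4 (the graph is bipartite).
n = 4
W = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0

# Stand-in for the SDP solution: for this bipartite graph the optimum is
# rank-1, v_i = +u on one side of the bipartition and -u on the other.
u = np.array([1.0, 0.0, 0.0])
V = np.stack([u, -u, u, -u])

rng = np.random.default_rng(0)
x = hyperplane_round(V, rng)
print(cut_value(W, x))   # -> 4.0: here every hyperplane recovers the optimal cut
```

On a general (non-bipartite) graph the embedding has higher rank, different hyperplanes yield different cuts, and the choice of hyperplane matters — which is exactly the degree of freedom the paper's RL agent optimizes instead of sampling uniformly.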

@article{malikal2025_2505.13405,
  title={A Dataless Reinforcement Learning Approach to Rounding Hyperplane Optimization for Max-Cut},
  author={Gabriel Malikal and Ismail Alkhouri and Alvaro Velasquez and Adam M. Alessio and Saiprasad Ravishankar},
  journal={arXiv preprint arXiv:2505.13405},
  year={2025}
}