ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.


BC-ADMM: An Efficient Non-convex Constrained Optimizer with Robotic Applications

7 April 2025
Zherong Pan
Kui Wu
Abstract

Non-convex constrained optimizations are ubiquitous in robotic applications such as multi-agent navigation, UAV trajectory optimization, and soft robot simulation. For this problem class, conventional optimizers suffer from small step sizes and slow convergence. We propose BC-ADMM, a variant of the Alternating Direction Method of Multipliers (ADMM), that can solve a class of non-convex constrained optimizations with biconvex constraint relaxation. Our algorithm allows larger step sizes by breaking the problem into small-scale sub-problems that can be easily solved in parallel. We show that our method has both a theoretical convergence speed guarantee and a practical convergence guarantee in the asymptotic sense. Through numerical experiments on four robotic applications, we show that BC-ADMM converges faster than conventional gradient descent and Newton's method in terms of wall-clock time.
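The splitting idea underlying ADMM can be illustrated on a toy constrained problem. The sketch below is a generic scaled-form ADMM, not the paper's BC-ADMM (which adds biconvex constraint relaxation and parallel sub-problems); the problem, function, and parameter names are illustrative assumptions.

```python
import numpy as np

def admm_nonneg_projection(a, rho=1.0, iters=100):
    """Scaled-form ADMM for: minimize 0.5*||x - a||^2  subject to  x >= 0.

    The problem is split as f(x) = 0.5*||x - a||^2 plus g(z) = indicator
    of the nonnegative orthant, with the consensus constraint x = z.
    Each sub-problem has a cheap closed-form solution, which is the kind
    of decomposition that makes ADMM-style methods easy to parallelize.
    """
    x = np.zeros_like(a, dtype=float)
    z = np.zeros_like(a, dtype=float)
    u = np.zeros_like(a, dtype=float)  # scaled dual variable
    for _ in range(iters):
        # x-update: closed-form minimizer of f(x) + (rho/2)*||x - z + u||^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: Euclidean projection onto the nonnegative orthant
        z = np.maximum(x + u, 0.0)
        # dual update: accumulate the primal residual x - z
        u = u + (x - z)
    return z
```

For this convex toy problem the iterates converge to the projection of `a` onto the nonnegative orthant; BC-ADMM extends this alternating scheme to a class of non-convex constraints via biconvex relaxation.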

@article{pan2025_2504.05465,
  title={BC-ADMM: An Efficient Non-convex Constrained Optimizer with Robotic Applications},
  author={Zherong Pan and Kui Wu},
  journal={arXiv preprint arXiv:2504.05465},
  year={2025}
}