In this paper, from an intuitive vanilla proximal-method perspective, we derive accelerated high-order optimization algorithms for minimizing a convex function with H\"{o}lder continuous derivatives. In this general convex setting, we propose a concise unified acceleration framework (UAF), which reconciles two distinct high-order acceleration approaches: one by Nesterov and Baes [29, 3, 33] and one by Monteiro and Svaiter [25]. As a result, the UAF unifies the high-order acceleration instances [29, 3, 33, 15, 16, 25, 19, 6, 14] of both approaches using only two problem-related parameters and two additional framework-design parameters. Furthermore, the UAF (and its analysis) is the first approach to make high-order methods applicable under high-order smoothness conditions with respect to non-Euclidean norms. If the function is further uniformly convex, we propose a general restart scheme for the UAF. The iteration complexities of instances of both the UAF and the restarted UAF match existing lower bounds in most important cases [2, 16]. For practical implementation, we introduce a new and effective heuristic that significantly simplifies the binary search procedure required by the framework. We use experiments to verify the effectiveness of the heuristic and to demonstrate clear and consistent advantages of high-order acceleration methods over first-order ones in terms of run-time complexity. Finally, the UAF is stated directly in the general composite convex setting, showing that the existing high-order algorithms [29, 3, 33, 16, 6, 14] extend naturally to this setting.