Dynamic Memory Based Adaptive Optimization

23 February 2024
Balázs Szegedy
Domonkos Czifra
Péter Korösi-Szabó
Abstract

Define an optimizer as having memory k if it stores k dynamically changing vectors in the parameter space. Classical SGD has memory 0, momentum SGD has memory 1, and the Adam optimizer has memory 2. We address the following questions: How can optimizers make use of more memory units? What information should be stored in them? How should they be used in the learning steps? As an approach to the last question, we introduce a general method called "Retrospective Learning Law Correction", or RLLC for short. This method computes a dynamically varying linear combination (called the learning law) of the memory units, which themselves may evolve arbitrarily. We demonstrate RLLC on optimizers whose memory units follow linear update rules and have small memory (at most 4 memory units). Our experiments show that on a variety of standard problems these optimizers outperform the three classical optimizers mentioned above. We conclude that RLLC is a promising framework for boosting the performance of known optimizers by adding more memory units and making them more adaptive.
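To make the abstract's setup concrete: an optimizer with memory k keeps k vectors alongside the parameters, updates them by fixed (here linear) rules, and takes a step equal to a weighted sum of those vectors, where the weights (the learning law) are themselves adapted over time. The sketch below illustrates only that structure; the specific memory update rules, the coefficient-correction heuristic, and all names (MemoryOptimizer, meta_lr, and so on) are illustrative assumptions, not the paper's actual RLLC algorithm.

    import numpy as np

    class MemoryOptimizer:
        """Toy memory-k optimizer: the step is a learned linear combination
        of k memory vectors (a hypothetical sketch, not the paper's method)."""

        def __init__(self, dim, k=2, lr=0.01, beta=0.9, meta_lr=1e-3):
            self.lr = lr              # base step size
            self.beta = beta          # decay used by the linear memory updates
            self.meta_lr = meta_lr    # step size for adapting the learning law
            self.memory = np.zeros((k, dim))  # k dynamically changing vectors
            self.coeffs = np.zeros(k)         # "learning law": one weight per unit
            self.coeffs[0] = 1.0              # start out as plain SGD on unit 0

        def step(self, params, grad):
            # Linear memory updates: unit 0 holds the raw gradient; each further
            # unit is an exponential moving average of the previous one, so with
            # k = 2 the memory roughly matches (gradient, momentum buffer).
            self.memory[0] = grad
            for i in range(1, len(self.coeffs)):
                self.memory[i] = self.beta * self.memory[i] + (1.0 - self.beta) * self.memory[i - 1]

            # Crude stand-in for retrospective correction: increase the weight of
            # memory units that currently correlate with the gradient direction.
            self.coeffs += self.meta_lr * (self.memory @ grad)

            # The actual step is the dynamically weighted combination of the units.
            return params - self.lr * (self.coeffs @ self.memory)

    # Usage on a simple quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
    opt = MemoryOptimizer(dim=3, k=3)
    x = np.ones(3)
    for _ in range(200):
        x = opt.step(x, x)

With coefficients fixed at (1, 0, ..., 0) this reduces to SGD; letting the coefficients drift is what makes the combination of memory units adaptive, which is the role the abstract assigns to the learning law.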
