POGD: Gradient Descent with New Stochastic Rules

15 October 2022 · arXiv:2210.10654
Feihu Han
Sida Xing
S. Khoo
Abstract

This paper introduces Particle Optimized Gradient Descent (POGD), an algorithm based on gradient descent that integrates the particle swarm optimization (PSO) principle into its iteration. The experiments show that the algorithm has adaptive learning ability. The experiments in this paper focus mainly on the training speed needed to reach a target value and on the ability to escape local minima, and are carried out with convolutional neural network (CNN) image classification on the MNIST and CIFAR-10 datasets.
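The abstract does not spell out POGD's exact update rule, so the sketch below is only an illustration of the general idea it describes: coupling a plain gradient step with standard PSO velocity terms (inertia, attraction to a particle's personal best, attraction to the swarm's global best) so that the swarm can escape local minima. The objective (Rastrigin), the function and parameter names, and all hyperparameter values are assumptions for demonstration, not the authors' method or settings.

    import numpy as np

    def rastrigin(x):
        # Toy multimodal objective with many local minima; global minimum at x = 0.
        return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

    def rastrigin_grad(x):
        # Analytic gradient of the Rastrigin function.
        return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

    def pso_gradient_descent(dim=2, n_particles=10, n_steps=300, lr=1e-2,
                             inertia=0.7, c1=0.8, c2=0.8, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-5.0, 5.0, size=(n_particles, dim))  # particle positions
        vel = np.zeros_like(pos)                                # particle velocities
        pbest = pos.copy()                                      # personal-best positions
        pbest_val = np.array([rastrigin(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()              # global-best position

        for _ in range(n_steps):
            for i in range(n_particles):
                r1, r2 = rng.random(2)
                # Blend a gradient step with the PSO velocity update:
                # inertia + pull toward personal best + pull toward global best.
                vel[i] = (inertia * vel[i]
                          + c1 * r1 * (pbest[i] - pos[i])
                          + c2 * r2 * (gbest - pos[i])
                          - lr * rastrigin_grad(pos[i]))
                pos[i] = pos[i] + vel[i]
                val = rastrigin(pos[i])
                if val < pbest_val[i]:
                    pbest_val[i], pbest[i] = val, pos[i].copy()
            # Refresh the global best after all particles have moved.
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, rastrigin(gbest)

    if __name__ == "__main__":
        best_x, best_f = pso_gradient_descent()
        print("best point:", best_x, "objective value:", best_f)

In this sketch the gradient term drives local descent while the swarm terms share information across particles; a single gradient-descent run on the same objective would typically stall in whichever local basin it starts in.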

View on arXiv