arXiv:1902.07419
Learning Sparse Neural Networks via ℓ0 and Tℓ1 by a Relaxed Variable Splitting Method with Application to Multi-scale Curve Classification

20 February 2019
Fanghui Xue
Jack Xin
Abstract

We study sparsification of convolutional neural networks (CNNs) by a relaxed variable splitting method with ℓ0 and transformed-ℓ1 (Tℓ1) penalties, with application to complex curves such as text written in different fonts, and words written with trembling hands simulating those of Parkinson's disease patients. The CNN contains three convolutional layers, each followed by max pooling, and finally a fully connected layer, which contains the largest number of network weights. With the ℓ0 penalty, we achieve over 99% test accuracy in distinguishing shaky from regular fonts or handwriting, with over 86% of the weights in the fully connected layer being zero. Comparable sparsity and test accuracy are also reached with a proper choice of the Tℓ1 penalty.
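In variable splitting methods with an ℓ0 penalty, the auxiliary-variable update is a hard-thresholding proximal step: entries of the weight vector whose magnitude falls below a threshold determined by the penalty strength and the splitting parameter are set to zero, which is what produces the high sparsity in the fully connected layer. A minimal sketch of that step, assuming an unregularized weight vector `w` and illustrative parameter names `lam` (penalty weight) and `beta` (splitting/quadratic coupling parameter) not taken from the paper:

```python
import numpy as np

def hard_threshold(w, lam, beta):
    """Minimizer of lam * ||u||_0 + (beta / 2) * ||u - w||^2 over u.

    Each entry is kept if |w_i| > sqrt(2 * lam / beta), else zeroed,
    which is the closed-form proximal step for the l0 penalty.
    """
    u = w.copy()
    u[np.abs(w) <= np.sqrt(2.0 * lam / beta)] = 0.0
    return u

# Illustrative use: sparsify a small weight vector.
w = np.array([0.05, -0.50, 1.20, 0.01])
u = hard_threshold(w, lam=0.02, beta=1.0)  # threshold = sqrt(0.04) = 0.2
```

With these illustrative values, the two small entries (0.05 and 0.01) fall below the 0.2 threshold and are zeroed, while the two large entries survive unchanged; applied to a trained fully connected layer, the fraction of zeroed entries is the reported sparsity.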
