Quantizing Small-Scale State-Space Models for Edge AI

14 June 2025
Leo Zhao
Tristan Torchet
Melika Payvand
Laura Kriener
Filippo Moro
Main text: 6 pages, 5 figures, 3 tables; bibliography: 2 pages
Abstract

State-space models (SSMs) have recently gained attention in deep learning for their ability to efficiently model long-range dependencies, making them promising candidates for edge-AI applications. In this paper, we analyze the effects of quantization on small-scale SSMs, focusing on reducing memory and computational costs while maintaining task performance. Using the S4D architecture, we first investigate post-training quantization (PTQ) and show that the state matrix A and internal state x are particularly sensitive to quantization. Furthermore, we analyze the impact of different quantization techniques applied to the parameters and activations in the S4D architecture. To address the performance drop observed after PTQ, we apply quantization-aware training (QAT), significantly improving accuracy from 40% (PTQ) to 96% on the sequential MNIST benchmark at 8-bit precision. We further demonstrate the potential of QAT in enabling sub-8-bit precisions and evaluate different parameterization schemes for QAT stability. Additionally, we propose a heterogeneous quantization strategy that assigns different precision levels to model components, reducing the overall memory footprint by a factor of 6x without sacrificing performance. Our results provide actionable insights for deploying quantized SSMs in resource-constrained environments.
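The abstract does not spell out how QAT or the heterogeneous precision assignment is implemented. A common approach, sketched below as an assumption rather than the authors' exact method, is to fake-quantize each parameter in the forward pass with a straight-through estimator for gradients, and to give each S4D component (A, B, C, the internal state x) its own bit-width. All names and bit-widths here are illustrative.

# Hypothetical QAT sketch: uniform symmetric fake-quantization with a
# straight-through estimator, plus an illustrative per-component bit-width map.
# This is an assumption about how such a scheme could look, not the paper's code.
import torch

def fake_quantize(t: torch.Tensor, bits: int) -> torch.Tensor:
    # Largest representable signed integer magnitude for the given bit-width.
    qmax = 2 ** (bits - 1) - 1
    # Per-tensor scale from the current dynamic range (clamped to avoid div-by-zero).
    scale = t.detach().abs().max().clamp(min=1e-8) / qmax
    # Round to the integer grid and clip to the representable range.
    q = torch.clamp(torch.round(t / scale), -qmax - 1, qmax)
    # Straight-through estimator: quantized values in the forward pass,
    # identity gradient in the backward pass.
    return t + (q * scale - t).detach()

# Heterogeneous precision assignment (illustrative bit-widths, not the paper's).
precision = {"A": 8, "B": 4, "C": 4, "x": 8}

# Example: fake-quantize the diagonal state matrix A before using it in the
# SSM recurrence during QAT.
A = torch.randn(64, requires_grad=True)
A_q = fake_quantize(A, precision["A"])

Keeping the master weights in full precision and only simulating quantization in the forward pass is what lets QAT recover the accuracy that PTQ loses: gradients still flow through the rounding step, so sensitive components such as A and x can adapt to their quantization grid during training.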

@article{zhao2025_2506.12480,
  title={Quantizing Small-Scale State-Space Models for Edge AI},
  author={Leo Zhao and Tristan Torchet and Melika Payvand and Laura Kriener and Filippo Moro},
  journal={arXiv preprint arXiv:2506.12480},
  year={2025}
}