Sparser, Better, Faster, Stronger: Sparsity Detection for Efficient Automatic Differentiation

29 January 2025
Adrian Hill
Guillaume Dalle
Abstract

From implicit differentiation to probabilistic modeling, Jacobian and Hessian matrices have many potential use cases in Machine Learning (ML), but they are viewed as computationally prohibitive. Fortunately, these matrices often exhibit sparsity, which can be leveraged to speed up the process of Automatic Differentiation (AD). This paper presents advances in sparsity detection, previously the performance bottleneck of Automatic Sparse Differentiation (ASD). Our implementation of sparsity detection is based on operator overloading, detects both local and global sparsity patterns, and supports flexible index set representations. It is fully automatic and requires no modification of user code, making it compatible with existing ML codebases. Most importantly, it is highly performant, unlocking Jacobians and Hessians at scales where they were considered too expensive to compute. On real-world problems from scientific ML, graph neural networks, and optimization, we show significant speed-ups of up to three orders of magnitude. Notably, using our sparsity detection system, ASD outperforms standard AD for one-off computations, without amortization of either sparsity detection or matrix coloring.
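The operator-overloading approach described in the abstract can be illustrated with a minimal sketch: each input carries an index set recording which inputs it may depend on, and overloaded arithmetic propagates unions of these sets, yielding a global Jacobian sparsity pattern. The Tracer class and jacobian_pattern helper below are hypothetical illustrations written in Python for this page, not the paper's implementation; they only cover global (worst-case) patterns with a single dense set representation.

class Tracer:
    """Toy global-sparsity tracer (hypothetical, not the paper's implementation).
    Carries the set of input indices the traced value may depend on."""

    def __init__(self, indices):
        self.indices = frozenset(indices)

    # Binary arithmetic propagates the union of the operands' index sets.
    def _combine(self, other):
        other_idx = other.indices if isinstance(other, Tracer) else frozenset()
        return Tracer(self.indices | other_idx)

    __add__ = __radd__ = _combine
    __sub__ = __rsub__ = _combine
    __mul__ = __rmul__ = _combine


def jacobian_pattern(f, n):
    """Boolean Jacobian sparsity pattern of f: R^n -> R^m, as a list of rows."""
    inputs = [Tracer({i}) for i in range(n)]
    outputs = f(inputs)
    return [[j in out.indices for j in range(n)] for out in outputs]


# Example: a banded stencil f_i(x) = x_{i-1} + x_i * x_{i+1} (clamped at the ends),
# whose Jacobian is tridiagonal.
def stencil(x):
    n = len(x)
    return [x[max(i - 1, 0)] + x[i] * x[min(i + 1, n - 1)] for i in range(n)]

for row in jacobian_pattern(stencil, 5):
    print([int(b) for b in row])

Local sparsity detection, also mentioned in the abstract, would instead trace at a specific input value and exploit locally zero derivatives (for example, inactive branches of max or relu), which can yield a sparser pattern than this global one.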

@article{hill2025_2501.17737,
  title   = {Sparser, Better, Faster, Stronger: Sparsity Detection for Efficient Automatic Differentiation},
  author  = {Adrian Hill and Guillaume Dalle},
  journal = {arXiv preprint arXiv:2501.17737},
  year    = {2025}
}