
Deep greedy unfolding: Sorting out argsorting in greedy sparse recovery algorithms

Abstract

Gradient-based learning requires (deep) neural networks to be differentiable at all steps. This includes model-based architectures constructed by unrolling the iterations of an iterative algorithm onto the layers of a neural network, a technique known as algorithm unrolling. However, greedy sparse recovery algorithms depend on the non-differentiable argsort operator, which hinders their integration into neural networks. In this paper, we address this challenge for Orthogonal Matching Pursuit (OMP) and Iterative Hard Thresholding (IHT), two popular representative algorithms in this class. We propose permutation-based variants of these algorithms and approximate the resulting permutation matrices with "soft" permutation matrices derived from softsort, a continuous relaxation of argsort. We demonstrate, both theoretically and numerically, that Soft-OMP and Soft-IHT, as differentiable counterparts of OMP and IHT that are fully compatible with neural network training, effectively approximate these algorithms with a controllable degree of accuracy. This leads to the development of OMP-Net and IHT-Net, fully trainable network architectures based on Soft-OMP and Soft-IHT, respectively. Finally, by treating the weights as "structure-aware" trainable parameters, we connect our approach to structured sparse recovery and demonstrate its ability to extract latent sparsity patterns from data.
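The abstract does not reproduce the softsort construction itself, but a minimal NumPy sketch of the standard relaxation (due to Prillo and Eisenschlos, 2020) may help fix ideas: softsort replaces the hard permutation matrix produced by argsort with a row-stochastic matrix obtained from a row-wise softmax over negative pairwise distances. The temperature tau, the descending-order convention, and the IHT-style top-k selection at the end are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def softsort(s, tau=0.1):
        """Soft permutation matrix approximating the argsort of s.

        Row i corresponds to rank i (descending order). As tau -> 0,
        each row approaches a one-hot vector and the matrix recovers
        the hard permutation matrix that sorts s.
        """
        s = np.asarray(s, dtype=float)
        s_sorted = np.sort(s)[::-1]                       # descending sort of s
        logits = -np.abs(s_sorted[:, None] - s[None, :]) / tau
        logits -= logits.max(axis=1, keepdims=True)       # numerical stability
        P = np.exp(logits)
        return P / P.sum(axis=1, keepdims=True)           # row-wise softmax

    # Hard thresholding, the selection step in IHT, picks the k entries of
    # largest magnitude via argsort -- the non-differentiable operation the
    # paper relaxes. (k = 2 here is an arbitrary illustrative choice.)
    x = np.array([0.3, -1.2, 2.5, 0.1])
    k = 2
    support_hard = np.argsort(-np.abs(x))[:k]   # -> [2 1]

    # Soft counterpart: for small tau, the rows of the soft permutation
    # matrix are nearly one-hot, and their argmax matches the hard argsort.
    P = softsort(np.abs(x), tau=0.05)
    print(np.argmax(P, axis=1)[:k])             # -> [2 1], matching support_hard

Because every operation in softsort is differentiable in s (away from ties), gradients can flow through the soft permutation matrix during training, which is what makes unrolled variants of OMP and IHT trainable end to end.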

@article{mohammad-taheri2025_2505.15661,
  title={Deep greedy unfolding: Sorting out argsorting in greedy sparse recovery algorithms},
  author={Sina Mohammad-Taheri and Matthew J. Colbrook and Simone Brugiapaglia},
  journal={arXiv preprint arXiv:2505.15661},
  year={2025}
}