ResearchTrend.AI

arXiv:1907.08592

Kernel Mode Decomposition and programmable/interpretable regression networks

19 July 2019
H. Owhadi
C. Scovel
G. Yoo
Abstract

Mode decomposition is a prototypical pattern recognition problem that can be addressed from the (a priori distinct) perspectives of numerical approximation, statistical inference and deep learning. Could its analysis through these combined perspectives be used as a Rosetta stone for deciphering mechanisms at play in deep learning? Motivated by this question we introduce programmable and interpretable regression networks for pattern recognition and address mode decomposition as a prototypical problem. The programming of these networks is achieved by assembling elementary modules decomposing and recomposing kernels and data. These elementary steps are repeated across levels of abstraction and interpreted from the equivalent perspectives of optimal recovery, game theory and Gaussian process regression (GPR). The prototypical mode/kernel decomposition module produces an optimal approximation $(w_1,w_2,\cdots,w_m)$ of an element $(v_1,v_2,\ldots,v_m)$ of a product of Hilbert subspaces of a common Hilbert space from the observation of the sum $v:=v_1+\cdots+v_m$. The prototypical mode/kernel recomposition module performs partial sums of the recovered modes $w_i$ based on the alignment between each recovered mode $w_i$ and the data $v$. We illustrate the proposed framework by programming regression networks approximating the modes $v_i = a_i(t)\,y_i\big(\theta_i(t)\big)$ of a (possibly noisy) signal $\sum_i v_i$ when the amplitudes $a_i$, instantaneous phases $\theta_i$ and periodic waveforms $y_i$ may all be unknown, and show near machine-precision recovery under regularity and separation assumptions on the instantaneous amplitudes $a_i$ and frequencies $\dot{\theta}_i$.
The structure of some of these networks shares intriguing similarities with convolutional neural networks, while remaining interpretable, programmable and amenable to theoretical analysis.
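The decomposition module described in the abstract admits a compact GPR illustration: if each mode is modeled by its own kernel $K_i$, the posterior-mean recovery of mode $i$ from the sum $v$ is $w_i = K_i\,(\sum_j K_j + \varepsilon I)^{-1} v$. The sketch below is my own minimal construction, not the authors' code; the specific kernels, frequencies, lengthscale and nugget are illustrative assumptions:

```python
import numpy as np

# Observe only the sum v = v1 + v2 of two modes on a grid of n points.
n = 200
t = np.linspace(0.0, 1.0, n)
D = t[:, None] - t[None, :]                # pairwise differences s - t

v1 = np.sin(2 * np.pi * t)                 # slow mode
v2 = 0.5 * np.sin(2 * np.pi * 15 * t)      # fast oscillatory mode
v = v1 + v2                                # the only observed data

# One kernel per mode (assumed known here): a smooth RBF kernel for the
# slow mode and a locally periodic kernel (carrier 15 Hz) for the fast one.
ell = 0.3
rbf = np.exp(-D**2 / (2 * ell**2))
K1 = rbf
K2 = np.cos(2 * np.pi * 15 * D) * rbf

# Posterior means w_i = K_i (K1 + K2 + eps*I)^{-1} v; eps is a small
# nugget for numerical stability of the linear solve.
eps = 1e-6
A = K1 + K2 + eps * np.eye(n)
w1 = K1 @ np.linalg.solve(A, v)
w2 = K2 @ np.linalg.solve(A, v)

# Relative recovery errors of each hidden mode.
err1 = np.linalg.norm(w1 - v1) / np.linalg.norm(v1)
err2 = np.linalg.norm(w2 - v2) / np.linalg.norm(v2)
```

Because the two kernels have almost disjoint spectral support (the RBF spectrum is concentrated near zero frequency, the locally periodic kernel near 15 Hz), the separation assumption of the abstract holds and both relative errors come out small; the recovered modes also sum back to the data up to the nugget.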
