Kernel Methods are Competitive for Operator Learning

26 April 2023
Pau Batlle
Matthieu Darcy
Bamdad Hosseini
H. Owhadi
arXiv:2304.13202
Abstract

We present a general kernel-based framework for learning operators between Banach spaces, along with a priori error analysis and comprehensive numerical comparisons with popular neural net (NN) approaches such as Deep Operator Net (DeepONet) [Lu et al.] and Fourier Neural Operator (FNO) [Li et al.]. We consider the setting where the input/output spaces of the target operator $\mathcal{G}^\dagger : \mathcal{U} \to \mathcal{V}$ are reproducing kernel Hilbert spaces (RKHS), the data comes in the form of partial observations $\phi(u_i), \varphi(v_i)$ of input/output functions $v_i = \mathcal{G}^\dagger(u_i)$ ($i = 1, \ldots, N$), and the measurement operators $\phi : \mathcal{U} \to \mathbb{R}^n$ and $\varphi : \mathcal{V} \to \mathbb{R}^m$ are linear. Writing $\psi : \mathbb{R}^n \to \mathcal{U}$ and $\chi : \mathbb{R}^m \to \mathcal{V}$ for the optimal recovery maps associated with $\phi$ and $\varphi$, we approximate $\mathcal{G}^\dagger$ with $\bar{\mathcal{G}} = \chi \circ \bar{f} \circ \phi$, where $\bar{f}$ is an optimal recovery approximation of $f^\dagger := \varphi \circ \mathcal{G}^\dagger \circ \psi : \mathbb{R}^n \to \mathbb{R}^m$. We show that, even when using vanilla kernels (e.g., linear or Matérn), our approach is competitive in terms of cost-accuracy trade-off and either matches or beats the performance of NN methods on a majority of benchmarks. Additionally, our framework offers several advantages inherited from kernel methods: simplicity, interpretability, convergence guarantees, a priori error estimates, and Bayesian uncertainty quantification. As such, it can serve as a natural benchmark for operator learning.
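To make the construction concrete, here is a minimal numerical sketch of the central finite-dimensional step: recovering $\bar{f}$ from the data $(\phi(u_i), \varphi(v_i))$ by kernel regression with a vanilla Matérn kernel. In the RKHS setting, the optimal recovery approximation is the minimum-norm interpolant of the observations, which coincides with the Gaussian process regression mean in the noiseless limit; the small "nugget" below merely regularizes the linear solve. The toy data, the kernel choice, and the helper names (`matern52`, `fit_fbar`) are illustrative assumptions rather than the authors' code, and the function-space recovery maps $\chi$ and $\psi$ are omitted.

```python
# Minimal sketch (illustrative, not the authors' code): learn
# fbar: R^n -> R^m from pairs (phi(u_i), varphi(v_i)) via kernel
# regression with a vanilla Matern-5/2 kernel. The recovery maps
# chi, psi that decode back to function space are omitted here.
import numpy as np

def matern52(X, Y, lengthscale=1.0):
    """Matern-5/2 kernel matrix between the rows of X (N,n) and Y (M,n)."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) / lengthscale
    return (1.0 + np.sqrt(5.0) * d + 5.0 * d**2 / 3.0) * np.exp(-np.sqrt(5.0) * d)

def fit_fbar(Phi_U, Phi_V, nugget=1e-8):
    """Given Phi_U = [phi(u_i)] (N,n) and Phi_V = [varphi(v_i)] (N,m),
    return fbar: R^n -> R^m, the kernel regression of the output vectors."""
    K = matern52(Phi_U, Phi_U)
    # Solve (K + nugget*I) A = Phi_V; the nugget regularizes the solve.
    A = np.linalg.solve(K + nugget * np.eye(len(Phi_U)), Phi_V)
    return lambda x: matern52(np.atleast_2d(x), Phi_U) @ A

# Toy usage: n = m = 32 linear measurements of a stand-in nonlinear map.
rng = np.random.default_rng(0)
Phi_U = rng.standard_normal((100, 32))      # phi(u_i), stand-in input data
Phi_V = np.tanh(Phi_U) + 0.5 * Phi_U**2     # varphi(G(u_i)), stand-in outputs
fbar = fit_fbar(Phi_U, Phi_V)
print(fbar(Phi_U[0]).shape)                 # -> (1, 32)
```

Because $\bar{f}$ reduces to one linear solve against the kernel matrix, the fit is training-free in the NN sense, and the same Gram matrix yields posterior variances for Bayesian uncertainty quantification.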
