p-Laplacian Transformer (arXiv:2311.03235)

6 November 2023
Tuan Nguyen, Tam Nguyen, Vinh-Tiep Nguyen, Tan-Minh Nguyen
Abstract

p-Laplacian regularization, rooted in graph and image signal processing, introduces a parameter p that controls the regularization effect on these data. Smaller values of p promote sparsity and interpretability, while larger values encourage smoother solutions. In this paper, we first show that the self-attention mechanism corresponds to minimizing a Laplacian regularization energy (p = 2) and thus encourages smoothness in the architecture. However, this smoothness is ill-suited to the heterophilic structure of self-attention in transformers, where attention weights between nearby tokens and distant ones are assigned indistinguishably. Building on this insight, we propose a novel class of transformers, the p-Laplacian Transformer (p-LaT), which leverages the p-Laplacian regularization framework to harness the heterophilic features within self-attention layers. In particular, low values of p effectively assign higher attention weights to tokens that are close to the token currently being processed. We empirically demonstrate the advantages of p-LaT over baseline transformers on a wide range of benchmark datasets.
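
To make the idea concrete, here is a minimal sketch of how a p-Laplacian-style factor could reweight standard softmax attention so that p = 2 recovers the usual smooth attention and p < 2 emphasizes nearby tokens. The function name, the use of value-vector distances, and the renormalization step are illustrative assumptions for this sketch, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def p_laplacian_attention(q, k, v, p=1.5, eps=1e-6):
    # Hypothetical sketch: reweight softmax attention by ||v_i - v_j||^(p-2).
    # For p < 2 this factor grows as token features get closer, so nearby
    # (similar) tokens receive relatively higher weight; p = 2 makes the
    # factor constant and recovers standard scaled dot-product attention.
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5      # (..., n, n) similarities
    attn = F.softmax(scores, dim=-1)
    dist = torch.cdist(v, v) + eps                   # pairwise feature distances
    attn = attn * dist ** (p - 2)                    # p-Laplacian-style factor
    attn = attn / attn.sum(dim=-1, keepdim=True)     # renormalize rows to sum to 1
    return attn @ v

# Example: 2 sequences of 8 tokens with 16-dimensional features.
q = k = v = torch.randn(2, 8, 16)
out = p_laplacian_attention(q, k, v, p=1.2)          # low p favors nearby tokens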
