ResearchTrend.AI
Improved Operator Learning by Orthogonal Attention (arXiv: 2310.12487)

19 October 2023
Zipeng Xiao, Zhongkai Hao, Bokai Lin, Zhijie Deng, Hang Su

Papers citing "Improved Operator Learning by Orthogonal Attention"

12 papers shown
Harnessing Scale and Physics: A Multi-Graph Neural Operator Framework for PDEs on Arbitrary Geometries
Zhihao Li, Haoze Song, Di Xiao, Zhilu Lai, Wei Wang
18 Nov 2024

Continuous Spatiotemporal Transformers
Antonio H. O. Fonseca, E. Zappala, J. O. Caro, David van Dijk
31 Jan 2023

Transformer for Partial Differential Equations' Operator Learning
Zijie Li, Kazem Meidani, A. Farimani
26 May 2022

Neural Operator: Learning Maps Between Function Spaces
Nikola B. Kovachki, Zong-Yi Li, Burigede Liu, Kamyar Azizzadenesheli, K. Bhattacharya, Andrew M. Stuart, Anima Anandkumar
19 Aug 2021

Fourier Neural Operator for Parametric Partial Differential Equations
Zong-Yi Li, Nikola B. Kovachki, Kamyar Azizzadenesheli, Burigede Liu, K. Bhattacharya, Andrew M. Stuart, Anima Anandkumar
18 Oct 2020

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
28 Jul 2020

Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret
29 Jun 2020

Linformer: Self-Attention with Linear Complexity
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma
08 Jun 2020

Generating Long Sequences with Sparse Transformers
R. Child, Scott Gray, Alec Radford, Ilya Sutskever
23 Apr 2019

Image Transformer
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam M. Shazeer, Alexander Ku, Dustin Tran
15 Feb 2018

Layer Normalization
Jimmy Lei Ba, J. Kiros, Geoffrey E. Hinton
21 Jul 2016

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
11 Feb 2015