ResearchTrend.AI

Fast Quantum Algorithm for Attention Computation

16 July 2023
Yeqi Gao, Zhao Song, Xin Yang, Ruizhe Zhang
LRM

Papers citing "Fast Quantum Algorithm for Attention Computation"

7 papers

1. The Expressibility of Polynomial based Attention Scheme
   Zhao Song, Guangyi Xu, Junze Yin
   30 Oct 2023

2. How to Protect Copyright Data in Optimization of Large Language Models?
   T. Chu, Zhao Song, Chiwun Yang
   23 Aug 2023

3. An Iterative Algorithm for Rescaled Hyperbolic Functions Regression
   Yeqi Gao, Zhao Song, Junze Yin
   01 May 2023

4. Towards provably efficient quantum algorithms for large-scale machine-learning models
   Junyu Liu, Minzhao Liu, Jin-Peng Liu, Ziyu Ye, Yunfei Wang, Yuri Alexeev, Jens Eisert, Liang Jiang
   06 Mar 2023

5. Bypass Exponential Time Preprocessing: Fast Neural Network Training via Weight-Data Correlation Preprocessing
   Josh Alman, Jiehao Liang, Zhao Song, Ruizhe Zhang, Danyang Zhuo
   25 Nov 2022

6. Quantum Speedups of Optimizing Approximately Convex Functions with Applications to Logarithmic Regret Stochastic Convex Bandits
   Tongyang Li, Ruizhe Zhang
   26 Sep 2022

7. Efficient Content-Based Sparse Attention with Routing Transformers (MoE)
   Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
   12 Mar 2020