Low Rank Factorization for Compact Multi-Head Self-Attention (arXiv:1912.00835)
26 November 2019
Sneha Mehta, Huzefa Rangwala, Naren Ramakrishnan

Papers citing "Low Rank Factorization for Compact Multi-Head Self-Attention"

5 papers:
Survey on Computer Vision Techniques for Internet-of-Things Devices
Ishmeet Kaur, Adwaita Janardhan Jadhav
AI4CE · 02 Aug 2023

Is Attention Better Than Matrix Decomposition?
Zhengyang Geng, Meng-Hao Guo, Hongxu Chen, Xia Li, Ke Wei, Zhouchen Lin
09 Sep 2021

A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015

Convolutional Neural Networks for Sentence Classification
Yoon Kim
AILaw · VLM · 25 Aug 2014