ResearchTrend.AI
Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?

2 October 2024
Xi Chen, Kaituo Feng, Changsheng Li, Xunhao Lai, Xiangyu Yue, Ye Yuan, Guoren Wang

Papers citing "Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?"

2 papers
Memory-Efficient LLM Training by Various-Grained Low-Rank Projection of Gradients
Yezhen Wang, Zhouhao Yang, Brian K Chen, Fanyi Pu, Bo-wen Li, Tianyu Gao, Kenji Kawaguchi
3 May 2025
CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation
Z. Liu, Ruijie Zhang, Zhilin Wang, Zi Yang, Paul Hovland, Bogdan Nicolae, Franck Cappello, Z. Zhang
16 Feb 2025