LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning

Papers citing "LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning"

35 / 35 papers shown
8-bit Optimizers via Block-wise Quantization
Tim Dettmers, M. Lewis, Sam Shleifer, Luke Zettlemoyer · 06 Oct 2021
