ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report (arXiv:2405.00732)

29 April 2024
Justin Zhao
Timothy Wang
Wael Abid
Geoffrey Angus
Arnav Garg
Jeffery Kinnison
Alex Sherstinsky
Piero Molino
Travis Addair
Devvret Rishi
    ALM

Papers citing "LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report"

15 / 15 papers shown
Title
EntityCLIP: Entity-Centric Image-Text Matching via Multimodal Attentive Contrastive Learning
Yaxiong Wang
Yijiao Wang
Lianwei Wu
Lechao Cheng
Zhun Zhong
Meng Wang
VLM
83
0
0
23 Oct 2024
FlexLLM: A System for Co-Serving Large Language Model Inference and Parameter-Efficient Finetuning
Xupeng Miao
Gabriele Oliaro
Xinhao Cheng
Vineeth Kada
Ruohan Gao
...
April Yang
Yingcheng Wang
Mengdi Wu
Colin Unger
Zhihao Jia
MoE
156
10
0
29 Feb 2024
Self-seeding and Multi-intent Self-instructing LLMs for Generating Intent-aware Information-Seeking dialogs
Arian Askari
Roxana Petcu
Chuan Meng
Mohammad Aliannejadi
Amin Abolghasemi
Evangelos Kanoulas
Suzan Verberne
104
9
0
18 Feb 2024
Fine-tuning Is a Surprisingly Effective Domain Adaptation Baseline in Handwriting Recognition
Jan Kohút
Michal Hradiš
93
7
0
13 Feb 2023
Learning to Prompt for Continual Learning
Zifeng Wang
Zizhao Zhang
Chen-Yu Lee
Han Zhang
Ruoxi Sun
Xiaoqi Ren
Guolong Su
Vincent Perot
Jennifer Dy
Tomas Pfister
CLL, VP, VLM, KELM
105
796
0
16 Dec 2021
Training Verifiers to Solve Math Word Problems
K. Cobbe
V. Kosaraju
Mohammad Bavarian
Mark Chen
Heewoo Jun
...
Jerry Tworek
Jacob Hilton
Reiichiro Nakano
Christopher Hesse
John Schulman
ReLM, OffRL, LRM
356
4,598
0
27 Oct 2021
To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation
Tom Kocmi
C. Federmann
Roman Grundkiewicz
Marcin Junczys-Dowmunt
Hitokazu Matsushita
Arul Menezes
93
210
0
22 Jul 2021
Evaluating Large Language Models Trained on Code
Mark Chen
Jerry Tworek
Heewoo Jun
Qiming Yuan
Henrique Pondé
...
Bob McGrew
Dario Amodei
Sam McCandlish
Ilya Sutskever
Wojciech Zaremba
ELM, ALM
238
5,665
0
07 Jul 2021
LoRA: Low-Rank Adaptation of Large Language Models
J. E. Hu
Yelong Shen
Phillip Wallis
Zeyuan Allen-Zhu
Yuanzhi Li
Shean Wang
Lu Wang
Weizhu Chen
OffRL, AI4TS, AI4CE, ALM, AIMat
502
10,526
0
17 Jun 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester
Rami Al-Rfou
Noah Constant
VP, VLM
592
4,093
0
18 Apr 2021
AdapterFusion: Non-Destructive Task Composition for Transfer Learning
Jonas Pfeiffer
Aishwarya Kamath
Andreas Rücklé
Kyunghyun Cho
Iryna Gurevych
CLL, MoMe
158
859
0
01 May 2020
Ludwig: a type-based declarative deep learning toolbox
Piero Molino
Yaroslav Dudin
Sai Sumanth Miryala
VLM
65
64
0
17 Sep 2019
To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks
Matthew E. Peters
Sebastian Ruder
Noah A. Smith
86
437
0
14 Mar 2019
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
ELM
1.1K
7,201
0
20 Apr 2018
Learning multiple visual domains with residual adapters
Sylvestre-Alvise Rebuffi
Hakan Bilen
Andrea Vedaldi
OOD
179
939
0
22 May 2017