How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources

7 June 2023
Yizhong Wang
Hamish Ivison
Pradeep Dasigi
Jack Hessel
Tushar Khot
Khyathi Raghavi Chandu
David Wadden
Kelsey MacMillan
Noah A. Smith
Iz Beltagy
Hannaneh Hajishirzi
    ALM
    ELM
arXiv: 2306.04751

Papers citing "How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources"

50 / 298 papers shown
CASA: Causality-driven Argument Sufficiency Assessment
Xiao Liu
Yansong Feng
Kai-Wei Chang
35
0
0
10 Jan 2024
LLaMA Pro: Progressive LLaMA with Block Expansion
Chengyue Wu
Yukang Gan
Yixiao Ge
Zeyu Lu
Jiahao Wang
Ye Feng
Ying Shan
Ping Luo
CLL
37
61
0
04 Jan 2024
A Comprehensive Analysis of the Effectiveness of Large Language Models as Automatic Dialogue Evaluators
Chen Zhang
L. F. D’Haro
Yiming Chen
Malu Zhang
Haizhou Li
ELM
21
29
0
24 Dec 2023
Hazards from Increasingly Accessible Fine-Tuning of Downloadable Foundation Models
Alan Chan
Ben Bucknall
Herbie Bradley
David M. Krueger
16
6
0
22 Dec 2023
Demystifying Instruction Mixing for Fine-tuning Large Language Models
Renxi Wang
Haonan Li
Minghao Wu
Yuxia Wang
Xudong Han
Chiyu Zhang
Timothy Baldwin
39
0
0
17 Dec 2023
One-Shot Learning as Instruction Data Prospector for Large Language Models
Yunshui Li
Binyuan Hui
Xiaobo Xia
Jiaxi Yang
Min Yang
...
Ling-Hao Chen
Junhao Liu
Tongliang Liu
Fei Huang
Yongbin Li
38
32
0
16 Dec 2023
diff History for Neural Language Agents
Ulyana Piterbarg
Lerrel Pinto
Rob Fergus
29
3
0
12 Dec 2023
MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction-Following
Renze Lou
Kai Zhang
Jian Xie
Yuxuan Sun
Janice Ahn
Hanzi Xu
Yu Su
Wenpeng Yin
37
26
0
05 Dec 2023
The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning
Bill Yuchen Lin
Abhilasha Ravichander
Ximing Lu
Nouha Dziri
Melanie Sclar
Khyathi Raghavi Chandu
Chandra Bhagavatula
Yejin Choi
22
169
0
04 Dec 2023
CoLLiE: Collaborative Training of Large Language Models in an Efficient Way
Kai Lv
Shuo Zhang
Tianle Gu
Shuhao Xing
Jiawei Hong
...
Tengxiao Liu
Yu Sun
Penousal Machado
Hang Yan
Xipeng Qiu
46
7
0
01 Dec 2023
MoDS: Model-oriented Data Selection for Instruction Tuning
Qianlong Du
Chengqing Zong
Jiajun Zhang
ALM
28
78
0
27 Nov 2023
LIMIT: Less Is More for Instruction Tuning Across Evaluation Paradigms
Aditi Jha
Sam Havens
Jeremey Dohmann
Alex Trott
Jacob P. Portes
ALM
24
11
0
22 Nov 2023
Data Diversity Matters for Robust Instruction Tuning
Alexander Bukharin
Tuo Zhao
84
36
0
21 Nov 2023
Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2
Hamish Ivison
Yizhong Wang
Valentina Pyatkin
Nathan Lambert
Matthew E. Peters
...
Joel Jang
David Wadden
Noah A. Smith
Iz Beltagy
Hanna Hajishirzi
ALM
ELM
32
181
0
17 Nov 2023
FollowEval: A Multi-Dimensional Benchmark for Assessing the Instruction-Following Capability of Large Language Models
Yimin Jing
Renren Jin
Jiahao Hu
Huishi Qiu
Xiaohua Wang
Peng Wang
Deyi Xiong
LRM
ELM
30
1
0
16 Nov 2023
Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models
Fangzhi Xu
Zhiyong Wu
Qiushi Sun
Siyu Ren
Fei Yuan
Shuai Yuan
Qika Lin
Yu Qiao
Jun Liu
LLMAG
35
33
0
15 Nov 2023
Towards Verifiable Text Generation with Symbolic References
Lucas Torroba Hennigen
Zejiang Shen
Aniruddha Nrusimha
Bernhard Gapp
David Sontag
Yoon Kim
28
11
0
15 Nov 2023
How Well Do Large Language Models Truly Ground?
Hyunji Lee
Se June Joo
Chaeeun Kim
Joel Jang
Doyoung Kim
Kyoung-Woon On
Minjoon Seo
HILM
44
6
0
15 Nov 2023
PLUG: Leveraging Pivot Language in Cross-Lingual Instruction Tuning
Zhihan Zhang
Dong-Ho Lee
Yuwei Fang
Wenhao Yu
Mengzhao Jia
Meng Jiang
Francesco Barbieri
ALM
51
28
0
15 Nov 2023
Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models
Keming Lu
Hongyi Yuan
Runji Lin
Junyang Lin
Zheng Yuan
Chang Zhou
Jingren Zhou
MoE
LRM
48
52
0
15 Nov 2023
Agent Lumos: Unified and Modular Training for Open-Source Language Agents
Da Yin
Faeze Brahman
Abhilasha Ravichander
Khyathi Raghavi Chandu
Kai-Wei Chang
Yejin Choi
Bill Yuchen Lin
LLMAG
45
36
0
09 Nov 2023
Rethinking Benchmark and Contamination for Language Models with Rephrased Samples
Shuo Yang
Wei-Lin Chiang
Lianmin Zheng
Joseph E. Gonzalez
Ion Stoica
ALM
27
112
0
08 Nov 2023
People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection
Indira Sen
Dennis Assenmacher
Mattia Samory
Isabelle Augenstein
Wil M.P. van der Aalst
Claudia Wagner
28
19
0
02 Nov 2023
What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Instruction Tuning
Yifan Du
Hangyu Guo
Kun Zhou
Wayne Xin Zhao
Jinpeng Wang
Chuyuan Wang
Mingchen Cai
Ruihua Song
Ji-Rong Wen
VLM
MLLM
LRM
78
22
0
02 Nov 2023
Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks
Po-Nien Kung
Fan Yin
Di Wu
Kai-Wei Chang
Nanyun Peng
77
40
0
01 Nov 2023
Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions
Taehyeon Kim
Joonkee Kim
Gihun Lee
Se-Young Yun
33
12
0
01 Nov 2023
Making Large Language Models Better Data Creators
Dong-Ho Lee
Jay Pujara
Mohit Sewak
Ryen W. White
S. Jauhar
ALM
SyDa
19
23
0
31 Oct 2023
How do Language Models Bind Entities in Context?
Jiahai Feng
Jacob Steinhardt
27
35
0
26 Oct 2023
OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models
Mingfeng Xue
Dayiheng Liu
Kexin Yang
Guanting Dong
Wenqiang Lei
Zheng Yuan
Chang Zhou
Jingren Zhou
LLMAG
27
2
0
25 Oct 2023
Instruct and Extract: Instruction Tuning for On-Demand Information Extraction
Yizhu Jiao
Ming Zhong
Sha Li
Ruining Zhao
Siru Ouyang
Heng Ji
Jiawei Han
41
25
0
24 Oct 2023
MindLLM: Pre-training Lightweight Large Language Model from Scratch, Evaluations and Domain Applications
Yizhe Yang
Huashan Sun
Jiawei Li
Runheng Liu
Yinghao Li
Yuhang Liu
Heyan Huang
Yang Gao
ALM
LRM
16
8
0
24 Oct 2023
Branch-Solve-Merge Improves Large Language Model Evaluation and Generation
Swarnadeep Saha
Omer Levy
Asli Celikyilmaz
Mohit Bansal
Jason Weston
Xian Li
MoMe
43
71
0
23 Oct 2023
Analyzing Multilingual Competency of LLMs in Multi-Turn Instruction Following: A Case Study of Arabic
Sabri Boughorbel
Majd Hawasly
37
8
0
23 Oct 2023
Large Language Models can Share Images, Too!
Young-Jun Lee
Dokyong Lee
Joo Won Sung
Jonghwan Hyeon
Ho-Jin Choi
MLLM
26
2
0
23 Oct 2023
Tuna: Instruction Tuning using Feedback from Large Language Models
Haoran Li
Yiran Liu
Xingxing Zhang
Wei Lu
Furu Wei
ALM
41
3
0
20 Oct 2023
AgentTuning: Enabling Generalized Agent Abilities for LLMs
Aohan Zeng
Mingdao Liu
Rui Lu
Bowen Wang
Xiao Liu
Yuxiao Dong
Jie Tang
LM&MA
ALM
LLMAG
32
161
0
19 Oct 2023
GraphGPT: Graph Instruction Tuning for Large Language Models
Jiabin Tang
Yuhao Yang
Wei Wei
Lei Shi
Lixin Su
Suqi Cheng
Dawei Yin
Chao Huang
39
123
0
19 Oct 2023
Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging
Joel Jang
Seungone Kim
Bill Yuchen Lin
Yizhong Wang
Jack Hessel
Luke Zettlemoyer
Hannaneh Hajishirzi
Yejin Choi
Prithviraj Ammanabrolu
MoMe
59
134
0
17 Oct 2023
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Akari Asai
Zeqiu Wu
Yizhong Wang
Avirup Sil
Hannaneh Hajishirzi
RALM
176
656
0
17 Oct 2023
Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective
Ming Zhong
Chenxin An
Weizhu Chen
Jiawei Han
Pengcheng He
31
9
0
17 Oct 2023
In-context Pretraining: Language Modeling Beyond Document Boundaries
Weijia Shi
Sewon Min
Maria Lomeli
Chunting Zhou
Margaret Li
...
Victoria Lin
Noah A. Smith
Luke Zettlemoyer
Scott Yih
Mike Lewis
LRM
RALM
SyDa
34
48
0
16 Oct 2023
Instruction Tuning with Human Curriculum
Bruce W. Lee
Hyunsoo Cho
Kang Min Yoo
50
3
0
14 Oct 2023
Towards the Fundamental Limits of Knowledge Transfer over Finite Domains
Qingyue Zhao
Banghua Zhu
41
4
0
11 Oct 2023
InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining
Wei Ping
Ming-Yu Liu
Lawrence C. McAfee
Peng Xu
Bo Li
M. Shoeybi
Bryan Catanzaro
RALM
18
47
0
11 Oct 2023
Lemur: Harmonizing Natural Language and Code for Language Agents
Yiheng Xu
Hongjin Su
Chen Xing
Boyu Mi
Qian Liu
...
Siheng Zhao
Lingpeng Kong
Bailin Wang
Caiming Xiong
Tao Yu
40
68
0
10 Oct 2023
SALMON: Self-Alignment with Instructable Reward Models
Zhiqing Sun
Songlin Yang
Hongxin Zhang
Qinhong Zhou
Zhenfang Chen
David D. Cox
Yiming Yang
Chuang Gan
ALM
SyDa
43
35
0
09 Oct 2023
How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition
Guanting Dong
Hongyi Yuan
Keming Lu
Chengpeng Li
Mingfeng Xue
Dayiheng Liu
Wei Wang
Zheng Yuan
Chang Zhou
Jingren Zhou
LRM
CLL
36
121
0
09 Oct 2023
Establishing Trustworthiness: Rethinking Tasks and Model Evaluation
Robert Litschko
Max Müller-Eberstein
Rob van der Goot
Leon Weber
Barbara Plank
LRM
31
2
0
09 Oct 2023
SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF
Yi Dong
Zhilin Wang
Makesh Narsimhan Sreedhar
Xianchao Wu
Oleksii Kuchaiev
ALM
LLMSV
47
65
0
09 Oct 2023
Resprompt: Residual Connection Prompting Advances Multi-Step Reasoning in Large Language Models
Song Jiang
Zahra Shakeri
Aaron Chan
Maziar Sanjabi
Hamed Firooz
...
Bugra Akyildiz
Yizhou Sun
Jinchao Li
Qifan Wang
Asli Celikyilmaz
LRM
ReLM
26
8
0
07 Oct 2023