Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization
17 April 2024
Costas Mavromatis, Petros Karypis, George Karypis
arXiv:2404.11531
MoMe
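The title describes fusing several LLMs at inference time by weighting them according to prompt perplexity. As a rough illustration only, the sketch below shows one way such perplexity-weighted fusion could look with Hugging Face `transformers`; the model names are placeholders, all models are assumed to share a tokenizer and vocabulary, and this is not a reproduction of the authors' exact algorithm.

```python
# Minimal sketch of perplexity-weighted test-time fusion (illustrative only).
# Assumes the listed checkpoints exist and share a tokenizer/vocabulary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAMES = ["model-a", "model-b"]  # hypothetical checkpoint names
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAMES[0])
models = [AutoModelForCausalLM.from_pretrained(n).eval() for n in MODEL_NAMES]

@torch.no_grad()
def prompt_perplexity(model, input_ids):
    """Perplexity of the prompt under one model (teacher-forced)."""
    out = model(input_ids, labels=input_ids)
    return torch.exp(out.loss).item()

@torch.no_grad()
def fused_generate(prompt, max_new_tokens=32, temperature=1.0):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Lower prompt perplexity -> higher fusion weight for that expert.
    ppl = torch.tensor([prompt_perplexity(m, input_ids) for m in models])
    weights = torch.softmax(-ppl / temperature, dim=0)
    for _ in range(max_new_tokens):
        # Fuse next-token distributions as a weighted mixture over experts.
        probs = sum(w * torch.softmax(m(input_ids).logits[:, -1, :], dim=-1)
                    for w, m in zip(weights, models))
        next_id = probs.argmax(dim=-1, keepdim=True)  # greedy decoding for simplicity
        input_ids = torch.cat([input_ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)
```

The intuition behind using prompt perplexity as the weighting signal is that a model assigning low perplexity to the prompt is likely better matched to that input, so its next-token distribution is trusted more in the mixture.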

Papers citing "Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization"

12 citing papers are listed below.
Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute
Jianhao Chen, Zishuo Xun, Bocheng Zhou, Han Qi, Qiaosheng Zhang, ..., Wei Hu, Yuzhong Qu, Wanli Ouyang, Shuyue Hu
01 Apr 2025
Harnessing Multiple Large Language Models: A Survey on LLM Ensemble
Zhijun Chen, Jingzheng Li, Pengpeng Chen, Zhuoran Li, Kai Sun, Yuankai Luo, Qianren Mao, Dingqi Yang, Hailong Sun, Philip S. Yu
ELM
25 Feb 2025
SimPER: A Minimalist Approach to Preference Alignment without Hyperparameters
Teng Xiao, Yige Yuan, Ziyang Chen, Mingxiao Li, Shangsong Liang, Zhaochun Ren, V. Honavar
21 Feb 2025
DFPE: A Diverse Fingerprint Ensemble for Enhancing LLM Performance
Seffi Cohen, Niv Goldshlager, Nurit Cohen-Inger, Bracha Shapira, Lior Rokach
29 Jan 2025
Purifying Large Language Models by Ensembling a Small Language Model
Tianlin Li, Qian Liu, Tianyu Pang, Chao Du, Qing Guo, Yang Liu, Min Lin
19 Feb 2024
Fusing Models with Complementary Expertise
Hongyi Wang, Felipe Maia Polo, Yuekai Sun, Souvik Kundu, Eric Xing, Mikhail Yurochkin
FedML, MoMe
02 Oct 2023
Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints
Aran Komatsuzaki, J. Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, N. Houlsby
MoMe, MoE
09 Dec 2022
Demystifying Prompts in Language Models via Perplexity Estimation
Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer
LRM
08 Dec 2022
DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi
MU
07 May 2021
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
AIMat
23 Oct 2019
PubMedQA: A Dataset for Biomedical Research Question Answering
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, Xinghua Lu
13 Sep 2019
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova
24 May 2019