ResearchTrend.AI
LEAD: Liberal Feature-based Distillation for Dense Retrieval
arXiv:2212.05225 (v2, latest) · 10 December 2022
Hao Sun, Xiao Liu, Yeyun Gong, Anlei Dong, Jing Lu, Yan Zhang, Linjun Yang, Rangan Majumder, Nan Duan
Links: arXiv abstract · PDF · HTML · GitHub (112★)

Papers citing "LEAD: Liberal Feature-based Distillation for Dense Retrieval"

42 / 42 papers shown (each entry: title, authors, topic tags where present, citation count, date; "…" marks authors elided in the source)

1. CAPSTONE: Curriculum Sampling for Dense Retrieval with Document Expansion
   Xingwei He, Yeyun Gong, Alex Jin, Hang Zhang, Anlei Dong, Jian Jiao, Siu-Ming Yiu, Nan Duan · RALM · 3 citations · 18 Dec 2022
2. SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval
   Kun Zhou, Yeyun Gong, Xiao Liu, Wayne Xin Zhao, Yelong Shen, …, Jing Lu, Rangan Majumder, Ji-Rong Wen, Nan Duan, Weizhu Chen · 35 citations · 21 Oct 2022
3. PROD: Progressive Distillation for Dense Retrieval
   Zhenghao Lin, Yeyun Gong, Xiao Liu, Hang Zhang, Chen Lin, …, Jian Jiao, Jing Lu, Daxin Jiang, Rangan Majumder, Nan Duan · 27 citations · 27 Sep 2022
4. ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval
   Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, …, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang · 60 citations · 18 May 2022
5. Curriculum Learning for Dense Retrieval Distillation
   Hansi Zeng, Hamed Zamani, Vishwa Vinay · 51 citations · 28 Apr 2022
6. Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model
   Shaden Smith, M. Patwary, Brandon Norick, P. LeGresley, Samyam Rajbhandari, …, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro · MoE · 741 citations · 28 Jan 2022
7. Large Dual Encoders Are Generalizable Retrievers
   Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, …, Vincent Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang · DML · 453 citations · 15 Dec 2021
8. RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking
   Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Ji-Rong Wen · 255 citations · 14 Oct 2021
9. Adversarial Retriever-Ranker for dense text retrieval
   Hang Zhang, Yeyun Gong, Yelong Shen, Jiancheng Lv, Nan Duan, Weizhu Chen · VLM, RALM · 118 citations · 07 Oct 2021
10. RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation
    Md. Akmal Haidar, Nithin Anchuri, Mehdi Rezagholizadeh, Abbas Ghaddar, Philippe Langlais, Pascal Poupart · 22 citations · 21 Sep 2021
11. Simple Entity-Centric Questions Challenge Dense Retrievers
    Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, Danqi Chen · RALM · 167 citations · 17 Sep 2021
12. PAIR: Leveraging Passage-Centric Similarity Relation for Improving Dense Passage Retrieval
    Ruiyang Ren, Shangwen Lv, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Ji-Rong Wen · 94 citations · 13 Aug 2021
13. Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval
    Luyu Gao, Jamie Callan · RALM · 336 citations · 12 Aug 2021
14. Towards Understanding Knowledge Distillation
    Mary Phuong, Christoph H. Lampert · 319 citations · 27 May 2021
15. Condenser: a Pre-training Architecture for Dense Retrieval
    Luyu Gao, Jamie Callan · AI4CE · 262 citations · 16 Apr 2021
16. COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List
    Luyu Gao, Zhuyun Dai, Jamie Callan · 218 citations · 15 Apr 2021
17. Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling
    Sebastian Hofstatter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy J. Lin, Allan Hanbury · VLM · 400 citations · 14 Apr 2021
18. Overview of the TREC 2020 deep learning track
    Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos · 387 citations · 15 Feb 2021
19. ALP-KD: Attention-Based Layer Projection for Knowledge Distillation
    Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, Qun Liu · 122 citations · 27 Dec 2020
20. Distilling Dense Representations for Ranking using Tightly-Coupled Teachers
    Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy J. Lin · 122 citations · 22 Oct 2020
21. RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering
    Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Xin Zhao, Daxiang Dong, Hua Wu, Haifeng Wang · RALM, OffRL · 617 citations · 16 Oct 2020
22. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval
    Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, Arnold Overwijk · 1,225 citations · 01 Jul 2020
23. Knowledge Distillation: A Survey
    Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · VLM · 2,960 citations · 09 Jun 2020
24. Language Models are Few-Shot Learners
    Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, …, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei · BDL · 42,055 citations · 28 May 2020
25. Sparse, Dense, and Attentional Representations for Text Retrieval
    Y. Luan, Jacob Eisenstein, Kristina Toutanova, M. Collins · 408 citations · 01 May 2020
26. ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT
    Omar Khattab, Matei A. Zaharia · 1,370 citations · 27 Apr 2020
27. Dense Passage Retrieval for Open-Domain Question Answering
    Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Yu Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih · RALM · 3,762 citations · 10 Apr 2020
28. Overview of the TREC 2019 deep learning track
    Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, E. Voorhees · 494 citations · 17 Mar 2020
29. Understanding and Improving Knowledge Distillation
    Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain · 131 citations · 10 Feb 2020
30. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
    Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf · 7,520 citations · 02 Oct 2019
31. TinyBERT: Distilling BERT for Natural Language Understanding
    Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, F. Wang, Qun Liu · VLM · 1,860 citations · 23 Sep 2019
32. Patient Knowledge Distillation for BERT Model Compression
    S. Sun, Yu Cheng, Zhe Gan, Jingjing Liu · 837 citations · 25 Aug 2019
33. Deeper Text Understanding for IR with Contextual Neural Language Modeling
    Zhuyun Dai, Jamie Callan · 448 citations · 22 May 2019
34. Document Expansion by Query Prediction
    Rodrigo Nogueira, Wei Yang, Jimmy J. Lin, Kyunghyun Cho · 415 citations · 17 Apr 2019
35. Improved Knowledge Distillation via Teacher Assistant
    Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, H. Ghasemzadeh · 1,075 citations · 09 Feb 2019
36. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · VLM, SSL, SSeg · 94,891 citations · 11 Oct 2018
37. Learning Deep Representations with Probabilistic Knowledge Transfer
    Nikolaos Passalis, Anastasios Tefas · 412 citations · 28 Mar 2018
38. Paraphrasing Complex Network: Network Compression via Factor Transfer
    Jangho Kim, Seonguk Park, Nojun Kwak · 550 citations · 14 Feb 2018
39. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
    Sergey Zagoruyko, N. Komodakis · 2,581 citations · 12 Dec 2016
40. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
    Payal Bajaj, Daniel Fernando Campos, Nick Craswell, Li Deng, Jianfeng Gao, …, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang · RALM · 2,728 citations · 28 Nov 2016
41. Distilling the Knowledge in a Neural Network
    Geoffrey E. Hinton, Oriol Vinyals, J. Dean · FedML · 19,660 citations · 09 Mar 2015
42. FitNets: Hints for Thin Deep Nets
    Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio · FedML · 3,887 citations · 19 Dec 2014