LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding

14 December 2020
Hao Fu
Shaojun Zhou
Qihong Yang
Junjie Tang
Guiquan Liu
Kaikui Liu
Xiaolong Li

Papers citing "LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding"

27 / 27 papers shown
ToXCL: A Unified Framework for Toxic Speech Detection and Explanation
Nhat M. Hoang
Do Xuan Long
Duc Anh Do
Duc Anh Vu
Anh Tuan Luu
39
4
0
25 Mar 2024
On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving
Kaituo Feng
Changsheng Li
Dongchun Ren
Ye Yuan
Guoren Wang
29
6
0
02 Mar 2024
Modeling Balanced Explicit and Implicit Relations with Contrastive Learning for Knowledge Concept Recommendation in MOOCs
Hengnian Gu
Zhiyi Duan
Pan Xie
Dongdai Zhou
AI4Ed
19
1
0
13 Feb 2024
A Comprehensive Survey of Compression Algorithms for Language Models
Seungcheol Park
Jaehyeon Choi
Sojin Lee
U. Kang
MQ
24
12
0
27 Jan 2024
Object Attribute Matters in Visual Question Answering
Peize Li
Q. Si
Peng Fu
Zheng Lin
Yan Wang
33
0
0
20 Dec 2023
Finding Order in Chaos: A Novel Data Augmentation Method for Time Series in Contrastive Learning
B. U. Demirel
Christian Holz
AI4TS
24
18
0
23 Sep 2023
On the Sweet Spot of Contrastive Views for Knowledge-enhanced Recommendation
Haibo Ye
Xinjie Li
Yuan Yao
Hanghang Tong
36
0
0
23 Sep 2023
$\rm SP^3$: Enhancing Structured Pruning via PCA Projection
Yuxuan Hu
Jing Zhang
Zhe Zhao
Chengliang Zhao
Xiaodong Chen
Cuiping Li
Hong Chen
28
1
0
31 Aug 2023
Learning to Distill Global Representation for Sparse-View CT
Zilong Li
Chenglong Ma
Jie Chen
Junping Zhang
Hongming Shan
21
9
0
16 Aug 2023
Online Distillation for Pseudo-Relevance Feedback
Sean MacAvaney
Xi Wang
16
2
0
16 Jun 2023
GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model
Shicheng Tan
Weng Lam Tam
Yuanchun Wang
Wenwen Gong
Yang Yang
...
Jiahao Liu
Jingang Wang
Shuo Zhao
Peng-Zhen Zhang
Jie Tang
ALM
MoE
19
11
0
11 Jun 2023
Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method
Shicheng Tan
Weng Lam Tam
Yuanchun Wang
Wenwen Gong
Shuo Zhao
Peng-Zhen Zhang
Jie Tang
VLM
17
1
0
11 Jun 2023
Online Continual Learning via the Knowledge Invariant and Spread-out Properties
Ya-nan Han
Jian-wei Liu
CLL
30
7
0
02 Feb 2023
Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation
Minglun Han
Feilong Chen
Jing Shi
Shuang Xu
Bo Xu
VLM
38
11
0
30 Jan 2023
Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training
Yuting Ning
Zhenya Huang
Xin Lin
Enhong Chen
Shiwei Tong
Zheng Gong
Shijin Wang
AIMat
37
6
0
18 Jan 2023
Knowledge Enhancement for Contrastive Multi-Behavior Recommendation
Hongrui Xuan
Yi Liu
Bohan Li
Hongzhi Yin
24
66
0
13 Jan 2023
cViL: Cross-Lingual Training of Vision-Language Models using Knowledge Distillation
Kshitij Gupta
Devansh Gautam
R. Mamidi
VLM
22
3
0
07 Jun 2022
Knowledge Graph Contrastive Learning for Recommendation
Yuhao Yang
Chao Huang
Lianghao Xia
Chenliang Li
24
321
0
02 May 2022
CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge Distillation
Md. Akmal Haidar
Mehdi Rezagholizadeh
Abbas Ghaddar
Khalil Bibi
Philippe Langlais
Pascal Poupart
CLL
25
6
0
15 Apr 2022
Contrastive Meta Learning with Behavior Multiplicity for Recommendation
Wei Wei
Chao Huang
Lianghao Xia
Yong-mei Xu
Jiashu Zhao
Dawei Yin
29
159
0
17 Feb 2022
From Consensus to Disagreement: Multi-Teacher Distillation for Semi-Supervised Relation Extraction
Wanli Li
T. Qian
15
2
0
02 Dec 2021
Investigating the Role of Negatives in Contrastive Representation Learning
Jordan T. Ash
Surbhi Goel
A. Krishnamurthy
Dipendra Kumar Misra
SSL
21
49
0
18 Jun 2021
XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation
Subhabrata Mukherjee
Ahmed Hassan Awadallah
Jianfeng Gao
17
22
0
08 Jun 2021
Knowledge Distillation: A Survey
Jianping Gou
B. Yu
Stephen J. Maybank
Dacheng Tao
VLM
19
2,835
0
09 Jun 2020
Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu
Tianxiang Sun
Yige Xu
Yunfan Shao
Ning Dai
Xuanjing Huang
LM&MA
VLM
243
1,450
0
18 Mar 2020
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
ELM
297
6,956
0
20 Apr 2018
Efficient Estimation of Word Representations in Vector Space
Tomáš Mikolov
Kai Chen
G. Corrado
J. Dean
3DV
233
31,253
0
16 Jan 2013