ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Weighted Distillation with Unlabeled Examples

13 October 2022
Fotis Iliopoulos
Vasilis Kontonis
Cenk Baykal
Gaurav Menghani
Khoa Trinh
Erik Vee

Papers citing "Weighted Distillation with Unlabeled Examples"

14 papers shown
  1. Robust RL with LLM-Driven Data Synthesis and Policy Adaptation for Autonomous Driving
     Sihao Wu, Jiaxu Liu, Xiangyu Yin, Guangliang Cheng, Xingyu Zhao, Meng Fang, Xinping Yi, Xiaowei Huang (16 Oct 2024)
  2. Linear Projections of Teacher Embeddings for Few-Class Distillation
     Noel Loo, Fotis Iliopoulos, Wei Hu, Erik Vee (30 Sep 2024)
  3. How to Train the Teacher Model for Effective Knowledge Distillation
     Shayan Mohajer Hamidi, Xizhen Deng, Renhao Tan, Linfeng Ye, Ahmed H. Salamah (25 Jul 2024)
  4. Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information [VLM]
     Linfeng Ye, Shayan Mohajer Hamidi, Renhao Tan, En-Hui Yang (16 Jan 2024)
  5. Student as an Inherent Denoiser of Noisy Teacher
     Jiachen Zhao (15 Dec 2023)
  6. MyriadAL: Active Few Shot Learning for Histopathology
     Nico Schiavone, Jingyi Wang, Shuangzhi Li, Roger J. Zemp, Xingyu Li (24 Oct 2023)
  7. Towards an On-device Agent for Text Rewriting [LLMAG]
     Yun Zhu, Yinxiao Liu, Felix Stahlberg, Shankar Kumar, Yu-hui Chen, Liangchen Luo, Lei Shu, Renjie Liu, Jindong Chen, Lei Meng (22 Aug 2023)
  8. Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes [ALM]
     Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister (03 May 2023)
  9. SLaM: Student-Label Mixing for Distillation with Unlabeled Examples
     Vasilis Kontonis, Fotis Iliopoulos, Khoa Trinh, Cenk Baykal, Gaurav Menghani, Erik Vee (08 Feb 2023)
  10. On student-teacher deviations in distillation: does it pay to disobey?
      Vaishnavh Nagarajan, A. Menon, Srinadh Bhojanapalli, H. Mobahi, Surinder Kumar (30 Jan 2023)
  11. Meta Pseudo Labels [VLM]
      Hieu H. Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, Quoc V. Le (23 Mar 2020)
  12. Confidence Regularized Self-Training
      Yang Zou, Zhiding Yu, Xiaofeng Liu, B. Kumar, Jinsong Wang (26 Aug 2019)
  13. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications [3DH]
      Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam (17 Apr 2017)
  14. ImageNet Large Scale Visual Recognition Challenge [VLM, ObjD]
      Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei (01 Sep 2014)