Undistillable: Making A Nasty Teacher That CANNOT teach students

16 May 2021
Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Chenyu You, Xiaohui Xie, Zhangyang Wang

Papers citing "Undistillable: Making A Nasty Teacher That CANNOT teach students"

6 / 6 papers shown
The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
Kaito Takanami, Takashi Takahashi, Ayaka Sakata
27 Jan 2025

Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples
Eda Yilmaz, H. Keles
08 Mar 2024 (AAML)

Linearizing Models for Efficient yet Robust Private Inference
Sreetama Sarkar, Souvik Kundu, P. Beerel
08 Feb 2024 (AAML)

Semantics-Preserved Distortion for Personal Privacy Protection in Information Management
Jiajia Li, P. Wang, Letian Peng, Shitou Zhang, Xueyi Li, Zuchao Li, Haihui Zhao
04 Jan 2022

Safe Distillation Box
Jingwen Ye, Yining Mao, Mingli Song, Xinchao Wang, Cheng Jin, Xiuming Zhang
05 Dec 2021 (AAML)

Distilling portable Generative Adversarial Networks for Image Translation
Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu
07 Mar 2020