NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks

1 November 2023 · arXiv:2311.00428
Seokil Ham, Jun-Gyu Park, Dong-Jun Han, Jaekyun Moon
AAML

Papers citing "NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks"

3 papers:

1. Adversarial Training: A Survey
   Mengnan Zhao, Lihe Zhang, Jingwen Ye, Huchuan Lu, Baocai Yin, Xinchao Wang
   AAML · 1 citation · 19 Oct 2024

2. The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training
   Junhao Dong, Seyed-Mohsen Moosavi-Dezfooli, Jianhuang Lai, Xiaohua Xie
   AAML · 28 citations · 01 Nov 2022

3. ImageNet Large Scale Visual Recognition Challenge
   Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
   VLM · ObjD · 39,238 citations · 01 Sep 2014