Blind Knowledge Distillation for Robust Image Classification

21 November 2022
Timo Kaiser
Lukas Ehmann
Christoph Reinders
Bodo Rosenhahn
    NoLa

Papers citing "Blind Knowledge Distillation for Robust Image Classification"

8 / 8 papers shown
UncertainSAM: Fast and Efficient Uncertainty Quantification of the Segment Anything Model
Timo Kaiser
Thomas Norrenbrock
Bodo Rosenhahn
08 May 2025
ANNE: Adaptive Nearest Neighbors and Eigenvector-based Sample Selection for Robust Learning with Noisy Labels
F. Cordeiro
G. Carneiro
NoLa
03 Nov 2024
The Quest of Finding the Antidote to Sparse Double Descent
Victor Quétu
Marta Milovanović
31 Aug 2023
HyperSparse Neural Networks: Shifting Exploration to Exploitation through Adaptive Regularization
Patrick Glandorf
Timo Kaiser
Bodo Rosenhahn
14 Aug 2023
Partial Label Supervision for Agnostic Generative Noisy Label Learning
Fengbei Liu
Chong Wang
Yuanhong Chen
Yuyuan Liu
G. Carneiro
NoLa
02 Aug 2023
Compensation Learning in Semantic Segmentation
Timo Kaiser
Christoph Reinders
Bodo Rosenhahn
NoLa
26 Apr 2023
PASS: Peer-Agreement based Sample Selection for training with Noisy Labels
Arpit Garg
Cuong C. Nguyen
Rafael Felix
Thanh-Toan Do
G. Carneiro
20 Mar 2023
DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu
Enzo Tartaglione
02 Mar 2023