
Training Deep Neural Networks Without Batch Normalization
arXiv: 2008.07970
18 August 2020
D. Gaur, Joachim Folz, Andreas Dengel
Topic: ODL

Papers citing "Training Deep Neural Networks Without Batch Normalization" (2 papers)
TinyCL: An Efficient Hardware Architecture for Continual Learning on Autonomous Systems
Eugenio Ressa, Alberto Marchisio, Maurizio Martina, Guido Masera, Mohamed Bennai
15 Feb 2024
On Feature Decorrelation in Self-Supervised Learning
Tianyu Hua, Wenxiao Wang, Zihui Xue, Sucheng Ren, Yue Wang, Hang Zhao
Topics: SSL, OOD
02 May 2021