ResearchTrend.AI

Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks

21 November 2019
Saurabh Singh, Shankar Krishnan
arXiv:1911.09737 · Tags: UQCV
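For context, the method named in the title replaces batch normalization with per-sample, per-channel statistics: activations are divided by the mean squared value over the spatial dimensions, then scaled, shifted, and passed through a Thresholded Linear Unit (TLU). The sketch below is a minimal NumPy rendering of that idea; the function name, tensor layout, and epsilon value are illustrative choices, not the authors' reference code.

```python
import numpy as np

def filter_response_norm(x, gamma, beta, tau, eps=1e-6):
    """Filter Response Normalization followed by a TLU.

    x: activations of shape (N, H, W, C).
    gamma, beta, tau: learned per-channel parameters of shape (C,).
    No batch statistics are used, so behavior is identical at any batch size.
    """
    # nu^2: mean of squared activations over the spatial dims,
    # computed independently per sample and per channel.
    nu2 = np.mean(np.square(x), axis=(1, 2), keepdims=True)
    x_hat = x / np.sqrt(nu2 + eps)
    y = gamma * x_hat + beta
    # TLU: a ReLU with a learned per-channel threshold tau.
    return np.maximum(y, tau)

# Toy usage on a batch of two 4x4 feature maps with 3 channels.
x = np.random.randn(2, 4, 4, 3)
out = filter_response_norm(x,
                           gamma=np.ones(3),
                           beta=np.zeros(3),
                           tau=np.zeros(3))
```

Because the statistics are per-sample, the same code path serves training and inference, which is the batch-independence property the title refers to.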

Papers citing "Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks" (13 of 13 shown; the trailing numeric counters are the site's per-paper metrics, reproduced as listed)

 1. Dynamic Gradient Sparse Update for Edge Training
    I-Hsuan Li, Tian-Sheuan Chang · 23 Mar 2025 · 68 / 1 / 0
 2. Unconditional stability of a recurrent neural circuit implementing divisive normalization
    Shivang Rawat, David J. Heeger, Stefano Martiniani · 27 Sep 2024 · 29 / 0 / 0
 3. Bayesian imaging inverse problem with SA-Roundtrip prior via HMC-pCN sampler
    Jiayu Qian, Yuanyuan Liu, Jingya Yang, Qingping Zhou · 24 Oct 2023 · 23 / 0 / 0
 4. DGFont++: Robust Deformable Generative Networks for Unsupervised Font Generation
    Xinyuan Chen, Yangchen Xie, Li Sun, Yue Lu · 30 Dec 2022 · 30 / 4 / 0
 5. On the Pitfalls of Batch Normalization for End-to-End Video Learning: A Study on Surgical Workflow Analysis
    Dominik Rivoir, Isabel Funke, Stefanie Speidel · 15 Mar 2022 · 24 / 17 / 0
 6. Benchmarking of DL Libraries and Models on Mobile Devices
    Qiyang Zhang, Xiang Li, Xiangying Che, Xiao Ma, Ao Zhou, Mengwei Xu, Shangguang Wang, Yun Ma, Xuanzhe Liu · 14 Feb 2022 · 25 / 48 / 0
 7. TS-Net: OCR Trained to Switch Between Text Transcription Styles
    Janina Kohut, Michal Hradiš · Tags: AI4TS · 09 Mar 2021 · 27 / 9 / 0
 8. Predictive Information Accelerates Learning in RL
    Kuang-Huei Lee, Ian S. Fischer, Anthony Z. Liu, Yijie Guo, Honglak Lee, John F. Canny, S. Guadarrama · 24 Jul 2020 · 23 / 72 / 0
 9. Deep Isometric Learning for Visual Recognition
    Haozhi Qi, Chong You, Xueliang Wang, Yi Ma, Jitendra Malik · Tags: VLM · 30 Jun 2020 · 35 / 53 / 0
10. YOLOv4: Optimal Speed and Accuracy of Object Detection
    Alexey Bochkovskiy, Chien-Yao Wang, H. Liao · Tags: VLM, ObjD · 23 Apr 2020 · 63 / 12,055 / 0
11. Evolving Normalization-Activation Layers
    Hanxiao Liu, Andrew Brock, Karen Simonyan, Quoc V. Le · 06 Apr 2020 · 17 / 79 / 0
12. Pipelined Backpropagation at Scale: Training Large Models without Batches
    Atli Kosson, Vitaliy Chiley, Abhinav Venigalla, Joel Hestness, Urs Koster · 25 Mar 2020 · 35 / 33 / 0
13. Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks
    Soham De, Samuel L. Smith · Tags: ODL · 24 Feb 2020 · 27 / 20 / 0