DiracNets: Training Very Deep Neural Networks Without Skip-Connections
1 June 2017 · arXiv:1706.00388
Sergey Zagoruyko, N. Komodakis
UQCV, OOD

Papers citing "DiracNets: Training Very Deep Neural Networks Without Skip-Connections"

22 / 22 papers shown
Enhancing Monotonic Modeling with Spatio-Temporal Adaptive Awareness in Diverse Marketing
Bin Li, Jiayan Pei, Feiyang Xiao, Yifan Zhao, Zhixing Zhang, Diwei Liu, Hengxu He, Jia Jia
20 Jun 2024

Adaptive Discovering and Merging for Incremental Novel Class Discovery
Guangyao Chen, Peixi Peng, Yangru Huang, Mengyue Geng, Yonghong Tian
CLL, MoMe
06 Mar 2024

RepQ: Generalizing Quantization-Aware Training for Re-Parametrized Architectures
Anastasiia Prutianova, Alexey Zaytsev, Chung-Kuei Lee, Fengyu Sun, Ivan Koryakovskiy
MQ
09 Nov 2023

Multi-Path Transformer is Better: A Case Study on Neural Machine Translation
Ye Lin, Shuhan Zhou, Yanyang Li, Anxiang Ma, Tong Xiao, Jingbo Zhu
10 May 2023

RIFormer: Keep Your Vision Backbone Effective While Removing Token Mixer
Jiahao Wang, Songyang Zhang, Yong Liu, Taiqiang Wu, Yujiu Yang, Xihui Liu, Kai-xiang Chen, Ping Luo, Dahua Lin
12 Apr 2023

InceptionNeXt: When Inception Meets ConvNeXt
Weihao Yu, Pan Zhou, Shuicheng Yan, Xinchao Wang
29 Mar 2023

QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms
Guillaume Berger, Manik Dhingra, Antoine Mercier, Yash Savani, Sunny Panchal, Fatih Porikli
SupR
08 Mar 2023

Tailor: Altering Skip Connections for Resource-Efficient Inference
Olivia Weng, Gabriel Marcano, Vladimir Loncar, Alireza Khodamoradi, Nojan Sheybani, Andres Meza, F. Koushanfar, K. Denolf, Javier Mauricio Duarte, Ryan Kastner
18 Jan 2023

RepMode: Learning to Re-parameterize Diverse Experts for Subcellular Structure Prediction
Donghao Zhou, Chunbin Gu, Junde Xu, Furui Liu, Qiong Wang, Guangyong Chen, Pheng-Ann Heng
MoE
20 Dec 2022

Make RepVGG Greater Again: A Quantization-aware Approach
Xiangxiang Chu, Liang Li, Bo Zhang
MQ
03 Dec 2022

AugOp: Inject Transformation into Neural Operator
Longqing Ye
ViT
23 Nov 2022

RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization
Chengpeng Chen, Zichao Guo, Haien Zeng, Pengfei Xiong, Jian Dong
11 Nov 2022

Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI & AIM 2022 challenge: Report
Andrey D. Ignatov, Radu Timofte, Maurizio Denna, Abdelbadie Younes, Ganzorig Gankhuyag, ..., Jing Liu, Garas Gendy, Nabil Sabor, J. Hou, Guanghui He
SupR, MQ
07 Nov 2022

What Makes Convolutional Models Great on Long Sequence Modeling?
Yuhong Li, Tianle Cai, Yi Zhang, De-huai Chen, Debadeepta Dey
VLM
17 Oct 2022

Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
Dongze Lian, Daquan Zhou, Jiashi Feng, Xinchao Wang
17 Oct 2022

RepSR: Training Efficient VGG-style Super-Resolution Networks with Structural Re-Parameterization and Batch Normalization
Xintao Wang, Chao Dong, Ying Shan
11 May 2022

RMNet: Equivalently Removing Residual Connection from Networks
Fanxu Meng, Hao Cheng, Jia-Xin Zhuang, Ke Li, Xing Sun
01 Nov 2021

A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks
Asaf Noy, Yi Tian Xu, Y. Aflalo, Lihi Zelnik-Manor, Rong Jin
12 Jan 2021

RepVGG: Making VGG-style ConvNets Great Again
Xiaohan Ding, Xinming Zhang, Ningning Ma, Jungong Han, Guiguang Ding, Jian Sun
11 Jan 2021

Deep Isometric Learning for Visual Recognition
Haozhi Qi, Chong You, Xueliang Wang, Yi Ma, Jitendra Malik
VLM
30 Jun 2020

Single Image Super-Resolution via Cascaded Multi-Scale Cross Network
Yanting Hu, Xinbo Gao, Jie Li, Yuanfei Huang, Hanzi Wang
SupR
24 Feb 2018

The exploding gradient problem demystified - definition, prevalence, impact, origin, tradeoffs, and solutions
George Philipp, D. Song, J. Carbonell
ODL
15 Dec 2017