ResearchTrend.AI
Wide Residual Networks (arXiv:1605.07146, v4)
Sergey Zagoruyko, N. Komodakis
23 May 2016
Links: arXiv abs · PDF · HTML · GitHub (1306★)
Papers citing "Wide Residual Networks" (50 of 4,147 shown)
- Cars Can't Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks
  Sungha Choi, J. Kim, Jaegul Choo · SSeg · 197 / 156 / 0 · 11 Mar 2020
- Using an ensemble color space model to tackle adversarial examples
  Shreyank N. Gowda, C. Yuan · AAML · 30 / 1 / 0 · 10 Mar 2020
- Diversity inducing Information Bottleneck in Model Ensembles
  Samarth Sinha, Homanga Bharadhwaj, Anirudh Goyal, Hugo Larochelle, Animesh Garg, Florian Shkurti · BDL · UQCV · 67 / 40 / 0 · 10 Mar 2020
- Knowledge distillation via adaptive instance normalization
  Jing Yang, Brais Martínez, Adrian Bulat, Georgios Tzimiropoulos · 63 / 24 / 0 · 09 Mar 2020
- Embedding Propagation: Smoother Manifold for Few-Shot Classification
  Pau Rodríguez, I. Laradji, Alexandre Drouin, Alexandre Lacoste · 73 / 196 / 0 · 09 Mar 2020
- Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network
  Wonchul Son, Youngbin Kim, Wonseok Song, Youngsuk Moon, Wonjun Hwang · 23 / 0 / 0 · 09 Mar 2020
- Π-nets: Deep Polynomial Neural Networks
  Grigorios G. Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Yannis Panagakis, Jiankang Deng, Stefanos Zafeiriou · 85 / 61 / 0 · 08 Mar 2020
- DADA: Differentiable Automatic Data Augmentation
  Yonggang Li, Guosheng Hu, Yongtao Wang, Timothy M. Hospedales, N. Robertson, Yongxin Yang · 92 / 110 / 0 · 08 Mar 2020
- Sampled Training and Node Inheritance for Fast Evolutionary Neural Architecture Search
  Haoyu Zhang, Yaochu Jin, Ran Cheng, K. Hao · 67 / 9 / 0 · 07 Mar 2020
- Towards Practical Lottery Ticket Hypothesis for Adversarial Training
  Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana · AAML · 142 / 14 / 0 · 06 Mar 2020
- Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
  Saehyung Lee, Hyungyu Lee, Sungroh Yoon · AAML · 252 / 119 / 0 · 05 Mar 2020
- A Closer Look at Accuracy vs. Robustness
  Yao-Yuan Yang, Cyrus Rashtchian, Hongyang R. Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri · OOD · 145 / 26 / 0 · 05 Mar 2020
- The large learning rate phase of deep learning: the catapult mechanism
  Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari · ODL · 224 / 241 / 0 · 04 Mar 2020
- Deep Learning in Memristive Nanowire Networks
  Jack D. Kendall, Ross D. Pantone, J. Nino · 21 / 2 / 0 · 03 Mar 2020
- BATS: Binary ArchitecTure Search
  Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos · MQ · 97 / 68 / 0 · 03 Mar 2020
- Curriculum By Smoothing
  Samarth Sinha, Animesh Garg, Hugo Larochelle · 105 / 7 / 0 · 03 Mar 2020
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
  Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong · OOD · AAML · 129 / 67 / 0 · 02 Mar 2020
- Flashlight CNN Image Denoising
  Pham Huu Thanh Binh, Cristóvão Cruz, K. Egiazarian · 25 / 6 / 0 · 02 Mar 2020
- Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
  Jonathan Frankle, D. Schwab, Ari S. Morcos · 115 / 143 / 0 · 29 Feb 2020
- Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond
  Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, B. Kailkhura, Xinyu Lin, Cho-Jui Hsieh · AAML · 64 / 12 / 0 · 28 Feb 2020
- An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation
  M. Takamoto, Yusuke Morishita, Hitoshi Imaoka · 57 / 33 / 0 · 28 Feb 2020
- Utilizing Network Properties to Detect Erroneous Inputs
  Matt Gorbett, Nathaniel Blanchard · AAML · 67 / 6 / 0 · 28 Feb 2020
- Learning Representations by Predicting Bags of Visual Words
  Spyros Gidaris, Andrei Bursuc, N. Komodakis, P. Pérez, Matthieu Cord · SSL · 116 / 118 / 0 · 27 Feb 2020
- Overfitting in adversarially robust deep learning
  Leslie Rice, Eric Wong, Zico Kolter · 165 / 811 / 0 · 26 Feb 2020
- Randomization matters. How to defend against strong adversarial attacks
  Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Y. Chevaleyre, Jamal Atif · AAML · 130 / 60 / 0 · 26 Feb 2020
- Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data
  Yen-Chang Hsu, Yilin Shen, Hongxia Jin, Z. Kira · OODD · 157 / 579 / 0 · 26 Feb 2020
- Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
  Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli · AAML · 62 / 406 / 0 · 26 Feb 2020
- On Feature Normalization and Data Augmentation
  Boyi Li, Felix Wu, Ser-Nam Lim, Serge J. Belongie, Kilian Q. Weinberger · 56 / 137 / 0 · 25 Feb 2020
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data
  Marc Finzi, Samuel Stanton, Pavel Izmailov, A. Wilson · 138 / 324 / 0 · 25 Feb 2020
- Layer-wise Conditioning Analysis in Exploring the Learning Dynamics of DNNs
  Lei Huang, Jie Qin, Li Liu, Fan Zhu, Ling Shao · AI4CE · 86 / 11 / 0 · 25 Feb 2020
- Learning Queuing Networks by Recurrent Neural Networks
  G. Garbi, Emilio Incerto, M. Tribastone · 10 / 16 / 0 · 25 Feb 2020
- Understanding and Mitigating the Tradeoff Between Robustness and Accuracy
  Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang · AAML · 104 / 229 / 0 · 25 Feb 2020
- I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively
  Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma · VLM · 68 / 20 / 0 · 25 Feb 2020
- HYDRA: Pruning Adversarially Robust Neural Networks
  Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana · AAML · 69 / 25 / 0 · 24 Feb 2020
- Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks
  Soham De, Samuel L. Smith · ODL · 106 / 20 / 0 · 24 Feb 2020
- The Early Phase of Neural Network Training
  Jonathan Frankle, D. Schwab, Ari S. Morcos · 98 / 174 / 0 · 24 Feb 2020
- Self-Adaptive Training: beyond Empirical Risk Minimization
  Lang Huang, Chaoning Zhang, Hongyang R. Zhang · NoLa · 97 / 205 / 0 · 24 Feb 2020
- Weighting Is Worth the Wait: Bayesian Optimization with Importance Sampling
  Setareh Ariafar, Zelda E. Mariet, Ehsan Elhamifar, Dana Brooks, Jennifer Dy, Jasper Snoek · 58 / 3 / 0 · 23 Feb 2020
- SNIFF: Reverse Engineering of Neural Networks with Fault Attacks
  J. Breier, Dirmanto Jap, Xiaolu Hou, S. Bhasin, Yang Liu · 75 / 53 / 0 · 23 Feb 2020
- Random Bundle: Brain Metastases Segmentation Ensembling through Annotation Randomization
  Darvin Yi, E. Grøvik, Michael Iv, E. Tong, Greg Zaharchuk, D. Rubin · 48 / 2 / 0 · 23 Feb 2020
- VFlow: More Expressive Generative Flows with Variational Data Augmentation
  Jianfei Chen, Cheng Lu, Biqi Chenli, Jun Zhu, Tian Tian · DRL · 90 / 63 / 0 · 22 Feb 2020
- Towards Robust and Reproducible Active Learning Using Neural Networks
  Prateek Munjal, Nasir Hayat, Munawar Hayat, J. Sourati, Shadab Khan · UQCV · 84 / 69 / 0 · 21 Feb 2020
- Calibrating Deep Neural Networks using Focal Loss
  Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, P. Dokania · UQCV · 98 / 468 / 0 · 21 Feb 2020
- Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation
  Dmitry Molchanov, Alexander Lyzhov, Yuliya Molchanova, Arsenii Ashukha, Dmitry Vetrov · TPM · 112 / 85 / 0 · 21 Feb 2020
- Parallel and distributed asynchronous adaptive stochastic gradient methods
  Yangyang Xu, Yibo Xu, Yonggui Yan, Colin Sutcher-Shepard, Leopold Grinberg, Jiewei Chen · 42 / 2 / 0 · 21 Feb 2020
- MaxUp: A Simple Way to Improve Generalization of Neural Network Training
  Chengyue Gong, Zhaolin Ren, Mao Ye, Qiang Liu · AAML · 77 / 56 / 0 · 20 Feb 2020
- A survey on Semi-, Self- and Unsupervised Learning for Image Classification
  Lars Schmarje, M. Santarossa, Simon-Martin Schroder, Reinhard Koch · SSL · VLM · 98 / 165 / 0 · 20 Feb 2020
- Boosting Adversarial Training with Hypersphere Embedding
  Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su · AAML · 89 / 156 / 0 · 20 Feb 2020
- Deep regularization and direct training of the inner layers of Neural Networks with Kernel Flows
  G. Yoo, H. Owhadi · 74 / 21 / 0 · 19 Feb 2020
- Dissecting Neural ODEs
  Stefano Massaroli, Michael Poli, Jinkyoo Park, Atsushi Yamashita, Hajime Asama · 133 / 204 / 0 · 19 Feb 2020