ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Fraternal Dropout
arXiv: 1711.00066 (v4, latest)

31 October 2017
Konrad Zolna
Devansh Arpit
Dendi Suhubdy
Yoshua Bengio

Papers citing "Fraternal Dropout"

32 citing papers
Enhancing Neural Network Interpretability with Feature-Aligned Sparse Autoencoders
Luke Marks, Alasdair Paren, David M. Krueger, Fazl Barez (02 Nov 2024)

Layer-wise Regularized Dropout for Neural Language Models
Shiwen Ni, Min Yang, Ruifeng Xu, Chengming Li, Xiping Hu (26 Feb 2024)

CR-SFP: Learning Consistent Representation for Soft Filter Pruning
Jingyang Xiang, Zhuangzhi Chen, Jianbiao Mei, Siqi Li, Jun Chen, Yong-Jin Liu (17 Dec 2023)

An Ensemble Approach to Personalized Real Time Predictive Writing for Experts
Sourav Prosad, Viswa Datha Polavarapu, Shrutendra Harsola (25 Aug 2023)

R-Block: Regularized Block of Dropout for convolutional networks
Liqi Wang, Qiyang Hu (27 Jul 2023)

Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding
Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, Huawei Shen, Xueqi Cheng (10 Jan 2023)

Generative Adversarial Training Can Improve Neural Language Models
Sajad Movahedi, A. Shakery (02 Nov 2022)

Information Geometry of Dropout Training
Masanari Kimura, H. Hino (22 Jun 2022)

Augmentation-induced Consistency Regularization for Classification
Jianguo Wu, Shijing Si, Jianzong Wang, Jing Xiao (25 May 2022)

A Survey on Dropout Methods and Experimental Verification in Recommendation
Yongqian Li, Weizhi Ma, C. L. Philip Chen, Hao Fei, Yiqun Liu, Shaoping Ma, Yue Yang (05 Apr 2022)

Dependency-based Mixture Language Models
Zhixian Yang, Xiaojun Wan (19 Mar 2022)

Preventing posterior collapse in variational autoencoders for text generation via decoder regularization
Alban Petit, Caio Corro (28 Oct 2021)

R-Drop: Regularized Dropout for Neural Networks
Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Hao Fei, Tie-Yan Liu (28 Jun 2021)

IAUnet: Global Context-Aware Feature Learning for Person Re-Identification
Rui Hou, Bingpeng Ma, Hong Chang, Xinqian Gu, Shiguang Shan, Xilin Chen (02 Sep 2020)

Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach
Wenyu Du, Zhouhan Lin, Songlin Yang, Timothy J. O'Donnell, Yoshua Bengio, Yue Zhang (12 May 2020)

Regularizing Neural Networks by Stochastically Training Layer Ensembles
Alex Labach, S. Valaee (21 Nov 2019)

Not Enough Data? Deep Learning to the Rescue!
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, N. Tepper, Naama Zwerdling (08 Nov 2019)

On the Regularization Properties of Structured Dropout
Ambar Pal, Connor Lane, René Vidal, B. Haeffele (30 Oct 2019)

Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes
Noémien Kocher, Christian Scuito, Lorenzo Tarantino, Alexandros Lazaridis, Andreas Fischer, C. Musat (18 Sep 2019)

Character n-gram Embeddings to Improve RNN Language Models
Sho Takase, Jun Suzuki, Masaaki Nagata (13 Jun 2019)

Bivariate Beta-LSTM
Kyungwoo Song, Joonho Jang, Seung-Jae Shin, Il-Chul Moon (25 May 2019)

Gmail Smart Compose: Real-Time Assisted Writing
Mengzhao Chen, Benjamin Lee, G. Bansal, Yuan Cao, Shuyuan Zhang, ..., Yinan Wang, Andrew M. Dai, Zhiwen Chen, Timothy Sohn, Yonghui Wu (17 May 2019)

Think Again Networks and the Delta Loss
Alexandre Salle, Marcelo O. R. Prates (26 Apr 2019)

Survey of Dropout Methods for Deep Neural Networks
Alex Labach, Hojjat Salehinejad, S. Valaee (25 Apr 2019)

Adversarial Dropout for Recurrent Neural Networks
Sungrae Park, Kyungwoo Song, Mingi Ji, Wonsung Lee, Il-Chul Moon (22 Apr 2019)

Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities
O. Ganea, Sylvain Gelly, Gary Bécigneul, Aliaksei Severyn (21 Feb 2019)

Co-regularized Alignment for Unsupervised Domain Adaptation
Abhishek Kumar, P. Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, William T. Freeman, G. Wornell (13 Nov 2018)

Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks
Xiaodong Cui, Wei Zhang, Zoltán Tüske, M. Picheny (16 Oct 2018)

Dropout as a Structured Shrinkage Prior
Eric T. Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth (09 Oct 2018)

Direct Output Connection for a High-Rank Language Model
Sho Takase, Jun Suzuki, Masaaki Nagata (30 Aug 2018)

Recent Advances in Deep Learning: An Overview
Matiur Rahman Minar, Jibon Naher (21 Jul 2018)

Pushing the bounds of dropout
Gábor Melis, Charles Blundell, Tomás Kociský, Karl Moritz Hermann, Chris Dyer, Phil Blunsom (23 May 2018)