ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

BYOL works even without batch statistics (arXiv:2010.10241)
20 October 2020
Authors: Pierre Harvey Richemond, Jean-Bastien Grill, Florent Altché, Corentin Tallec, Florian Strub, Andrew Brock, Samuel L. Smith, Soham De, Razvan Pascanu, Bilal Piot, Michal Valko
Topics: SSL

Papers citing "BYOL works even without batch statistics"

28 / 28 papers shown
  1. No Other Representation Component Is Needed: Diffusion Transformers Can Provide Representation Guidance by Themselves (05 May 2025) — D. Jiang, Mengmeng Wang, Liuzhuozheng Li, Lei Zhang, Haoyu Wang, Wei Wei, Guang Dai, Yanning Zhang, Jingdong Wang [DiffM]
  2. Analysis of Spatial augmentation in Self-supervised models in the purview of training and test distributions (26 Sep 2024) — Abhishek Jha, Tinne Tuytelaars
  3. Enhancing Weakly-Supervised Histopathology Image Segmentation with Knowledge Distillation on MIL-Based Pseudo-Labels (14 Jul 2024) — Yinsheng He, Xingyu Li, Roger J. Zemp [VLM]
  4. DimCL: Dimensional Contrastive Learning For Improving Self-Supervised Learning (21 Sep 2023) — Thanh Nguyen, T. Pham, Chaoning Zhang, T. Luu, Thang Vu, Chang-Dong Yoo
  5. The Edge of Orthogonality: A Simple View of What Makes BYOL Tick (09 Feb 2023) — Pierre Harvey Richemond, Allison C. Tam, Yunhao Tang, Florian Strub, Bilal Piot, Felix Hill [SSL]
  6. Learning the Relation between Similarity Loss and Clustering Loss in Self-Supervised Learning (08 Jan 2023) — Jidong Ge, YuXiang Liu, Jie Gui, Lanting Fang, Ming Lin, James T. Kwok, LiGuo Huang, B. Luo [SSL]
  7. Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods (30 Sep 2022) — Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelović, João Carreira, Olivier J. Hénaff
  8. The Geometry of Self-supervised Learning Models and its Impact on Transfer Learning (18 Sep 2022) — Romain Cosentino, Sarath Shekkizhar, Mahdi Soltanolkotabi, A. Avestimehr, Antonio Ortega [SSL]
  9. Transfer Learning for Segmentation Problems: Choose the Right Encoder and Skip the Decoder (29 Jul 2022) — Jonas Dippel, Matthias Lenga, Thomas Goerttler, Klaus Obermayer, Johannes Höhne [SSL]
  10. Self-supervised learning with rotation-invariant kernels (28 Jul 2022) — Léon Zheng, Gilles Puy, E. Riccietti, Patrick Pérez, Rémi Gribonval [SSL]
  11. Is one annotation enough? A data-centric image classification benchmark for noisy and ambiguous label estimation (13 Jul 2022) — Lars Schmarje, Vasco Grossmann, Claudius Zelenka, S. Dippel, R. Kiko, ..., M. Pastell, J. Stracke, A. Valros, N. Volkmann, Reinhard Koch
  12. BYOL-Explore: Exploration by Bootstrapped Prediction (16 Jun 2022) — Z. Guo, S. Thakoor, Miruna Pislar, Bernardo Avila-Pires, Florent Altché, ..., Yunhao Tang, Michal Valko, Rémi Munos, M. G. Azar, Bilal Piot
  13. Extreme Masking for Learning Instance and Distributed Visual Representations (09 Jun 2022) — Zhirong Wu, Zihang Lai, Xiao Sun, Stephen Lin
  14. The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning (12 May 2022) — Zixin Wen, Yuanzhi Li [SSL]
  15. Self-Supervised Learning for Invariant Representations from Multi-Spectral and SAR Images (04 May 2022) — P. Jain, Bianca Schoen-Phelan, R. Ross
  16. Self-Labeling Refinement for Robust Representation Learning with Bootstrap Your Own Latent (09 Apr 2022) — Siddhant Garg, Dhruval Jain [SSL]
  17. Audio Self-supervised Learning: A Survey (02 Mar 2022) — Shuo Liu, Adria Mallol-Ragolta, Emilia Parada-Cabeleiro, Kun Qian, Xingshuo Jing, Alexander Kathan, Bin Hu, Bjoern W. Schuller [SSL]
  18. VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning (11 May 2021) — Adrien Bardes, Jean Ponce, Yann LeCun [SSL, DML]
  19. On Feature Decorrelation in Self-Supervised Learning (02 May 2021) — Tianyu Hua, Wenxiao Wang, Zihui Xue, Sucheng Ren, Yue Wang, Hang Zhao [SSL, OOD]
  20. A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning (29 Apr 2021) — Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross B. Girshick, Kaiming He [SSL, AI4TS]
  21. Broaden Your Views for Self-Supervised Video Learning (30 Mar 2021) — Adrià Recasens, Pauline Luc, Jean-Baptiste Alayrac, Luyu Wang, Ross Hemsley, ..., Florent Altché, M. Valko, Jean-Bastien Grill, Aaron van den Oord, Andrew Zisserman [SSL, AI4TS]
  22. Barlow Twins: Self-Supervised Learning via Redundancy Reduction (04 Mar 2021) — Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny [SSL]
  23. Mine Your Own vieW: Self-Supervised Learning Through Across-Sample Prediction (19 Feb 2021) — Mehdi Azabou, M. G. Azar, Ran Liu, Chi-Heng Lin, Erik C. Johnson, ..., Lindsey Kitchell, Keith B. Hengen, William R. Gray Roncal, Michal Valko, Eva L. Dyer [AI4TS]
  24. Momentum^2 Teacher: Momentum Teacher with Momentum Statistics for Self-Supervised Learning (19 Jan 2021) — Zeming Li, Songtao Liu, Jian-jun Sun
  25. Understanding Self-supervised Learning with Dual Deep Networks (01 Oct 2020) — Yuandong Tian, Lantao Yu, Xinlei Chen, Surya Ganguli [SSL]
  26. Improved Baselines with Momentum Contrastive Learning (09 Mar 2020) — Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He [SSL]
  27. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks (14 Jun 2018) — Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington
  28. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results (06 Mar 2017) — Antti Tarvainen, Harri Valpola [OOD, MoMe]