
Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap (arXiv:2303.06484)

11 March 2023
Weiyang Liu, L. Yu, Adrian Weller, Bernhard Schölkopf

Papers citing "Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap"

21 papers shown

Are All Losses Created Equal: A Neural Collapse Perspective
Jinxin Zhou, Chong You, Xiao Li, Kangning Liu, Sheng Liu, Qing Qu, Zhihui Zhu
04 Oct 2022

AdaFace: Quality Adaptive Margin for Face Recognition
Minchul Kim, Anil K. Jain, Xiaoming Liu
03 Apr 2022 · CVBM

MagFace: A Universal Representation for Face Recognition and Quality Assessment
Qiang Meng, Shichao Zhao, Zhida Huang, F. Zhou
11 Mar 2021 · CVBM

Barlow Twins: Self-Supervised Learning via Redundancy Reduction
Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny
04 Mar 2021 · SSL

Exploring Simple Siamese Representation Learning
Xinlei Chen, Kaiming He
20 Nov 2020 · SSL

Partial FC: Training 10 Million Identities on a Single Machine
Xiang An, Xuhan Zhu, Yanghua Xiao, Lan Wu, Ming Zhang, Yuan Gao, Bin Qin, Debing Zhang, Ying Fu
11 Oct 2020 · CVBM

Prevalence of Neural Collapse during the terminal phase of deep learning training
Vardan Papyan, Xuemei Han, D. Donoho
18 Aug 2020

Evaluation of Neural Architectures Trained with Square Loss vs Cross-Entropy in Classification Tasks
Like Hui, M. Belkin
12 Jun 2020 · UQCV, AAML, VLM

What Makes for Good Views for Contrastive Learning?
Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola
20 May 2020 · SSL

Supervised Contrastive Learning
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan
23 Apr 2020 · SSL

A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses
Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Eric Granger, M. Pedersoli, Pablo Piantanida, Ismail Ben Ayed
19 Mar 2020 · SSL

Circle Loss: A Unified Perspective of Pair Similarity Optimization
Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi Zhang, Liang Zheng, Zhongdao Wang, Yichen Wei
25 Feb 2020

The Group Loss for Deep Metric Learning
Ismail Elezi, Sebastiano Vascon, Alessandro Torcinovich, Marcello Pelillo, Laura Leal-Taixe
01 Dec 2019

Momentum Contrast for Unsupervised Visual Representation Learning
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross B. Girshick
13 Nov 2019 · SSL

Regularizing Neural Networks via Minimizing Hyperspherical Energy
Rongmei Lin, Weiyang Liu, Zhen Liu, Chen Feng, Zhiding Yu, James M. Rehg, Li Xiong, Le Song
12 Jun 2019

Large-Margin Softmax Loss for Convolutional Neural Networks
Weiyang Liu, Yandong Wen, Zhiding Yu, Meng Yang
07 Dec 2016 · CVBM

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016 · ODL

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
25 Aug 2016 · PINN, 3DV

Reducing Overfitting in Deep Networks by Decorrelating Representations
Michael Cogswell, Faruk Ahmed, Ross B. Girshick, C. L. Zitnick, Dhruv Batra
19 Nov 2015

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
06 Feb 2015 · VLM