Orthogonalising gradients to speed up neural network optimisation
Mark Tuddenham, Adam Prugel-Bennett, Jonathan Hare
14 February 2022 · arXiv:2202.07052

Papers citing "Orthogonalising gradients to speed up neural network optimisation" (7 papers)
Understanding Gradient Orthogonalization for Deep Learning via Non-Euclidean Trust-Region Optimization
  Dmitry Kovalev · 16 Mar 2025
Barlow Twins: Self-Supervised Learning via Redundancy Reduction
  Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny · 04 Mar 2021
Gradient Surgery for Multi-Task Learning
  Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn · 19 Jan 2020
Orthogonal Convolutional Neural Networks
  Jiayun Wang, Yubei Chen, Rudrasis Chakraborty, Stella X. Yu · 27 Nov 2019
Orthogonal Deep Neural Networks
  Kui Jia, Shuai Li, Yuxin Wen, Tongliang Liu, Dacheng Tao · 15 May 2019
Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?
  Nitin Bansal, Xiaohan Chen, Zhangyang Wang · 22 Oct 2018
Large Batch Training of Convolutional Networks
  Yang You, Igor Gitman, Boris Ginsburg · 13 Aug 2017