Making Coherence Out of Nothing At All: Measuring the Evolution of Gradient Alignment

3 August 2020
S. Chatterjee, Piotr Zielinski
Abstract

We propose a new metric ($m$-coherence) to experimentally study the alignment of per-example gradients during training. Intuitively, given a sample of size $m$, $m$-coherence is the number of examples in the sample that benefit, on average, from a small step along the gradient of any one example. We show that compared to other commonly used metrics, $m$-coherence is more interpretable, cheaper to compute ($O(m)$ instead of $O(m^2)$), and mathematically cleaner. (We note that $m$-coherence is closely connected to gradient diversity, a quantity previously used in some theoretical bounds.) Using $m$-coherence, we study the evolution of the alignment of per-example gradients in ResNet and Inception models on ImageNet and several variants with label noise, particularly from the perspective of the recently proposed Coherent Gradients (CG) theory, which provides a simple, unified explanation for memorization and generalization [Chatterjee, ICLR 20]. Although we have several interesting takeaways, our most surprising result concerns memorization. Naively, one might expect that when training with completely random labels, each example is fitted independently, and so $m$-coherence should be close to 1. However, this is not the case: $m$-coherence reaches much higher values during training (in the hundreds), indicating that over-parameterized neural networks find common patterns even in scenarios where generalization is not possible. A detailed analysis of this phenomenon provides a deeper confirmation of CG but, at the same time, puts into sharp relief what is missing from the theory to provide a complete explanation of generalization in neural networks.
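To make the definition concrete, below is a minimal sketch of one natural formalization consistent with the abstract's description: the squared norm of the summed per-example gradients divided by the sum of their squared norms, which is roughly 1 when gradients are mutually orthogonal and $m$ when they are perfectly aligned, and which can be computed in a single $O(m)$ pass. The function name `m_coherence` and this exact normalization are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def m_coherence(grads: np.ndarray) -> float:
    """Coherence of a sample of per-example gradients.

    grads: array of shape (m, d), one flattened per-example gradient
    per row. Returns a value that is ~1 when the gradients are
    mutually orthogonal (each example fitted independently) and m
    when they are identical. A single O(m*d) pass suffices, versus
    O(m^2 * d) for explicit pairwise dot products.
    """
    g_sum = grads.sum(axis=0)              # sum_i g_i
    num = float(g_sum @ g_sum)             # ||sum_i g_i||^2
    den = float((grads * grads).sum())     # sum_i ||g_i||^2
    return num / den

# Sanity checks on the two limiting cases described in the abstract.
m, d = 8, 1000
rng = np.random.default_rng(0)

aligned = np.tile(rng.normal(size=d), (m, 1))  # identical gradients
print(m_coherence(aligned))                    # -> 8.0 (= m)

orthogonal = np.eye(m, d)                      # mutually orthogonal rows
print(m_coherence(orthogonal))                 # -> 1.0
```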
