

Principal Components Bias in Over-parameterized Linear Models, and its Manifestation in Deep Neural Networks

12 May 2021
Guy Hacohen, D. Weinshall
arXiv: 2105.05553

Papers citing "Principal Components Bias in Over-parameterized Linear Models, and its Manifestation in Deep Neural Networks"

4 / 4 papers shown
  • Many Perception Tasks are Highly Redundant Functions of their Input Data. Rahul Ramesh, Anthony Bisulco, Ronald W. DiTullio, Linran Wei, Vijay Balasubramanian, Kostas Daniilidis, Pratik Chaudhari. 18 Jul 2024.
  • Linear CNNs Discover the Statistical Structure of the Dataset Using Only the Most Dominant Frequencies. Hannah Pinson, Joeri Lenaerts, V. Ginis. 03 Mar 2023.
  • Simplicity Bias in 1-Hidden Layer Neural Networks. Depen Morwani, Jatin Batra, Prateek Jain, Praneeth Netrapalli. 01 Feb 2023.
  • When Deep Classifiers Agree: Analyzing Correlations between Learning Order and Image Statistics. Iuliia Pliushch, Martin Mundt, Nicolas Lupp, Visvanathan Ramesh. 19 May 2021.