Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent

18 April 2024
Yiwen Kou, Zixiang Chen, Quanquan Gu, Sham Kakade
arXiv: 2404.12376

Papers citing "Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent"

12 papers shown
Low-dimensional Functions are Efficiently Learnable under Randomly Biased Distributions
Elisabetta Cornacchia, Dan Mikulincer, Elchanan Mossel
10 Feb 2025
From Sparse Dependence to Sparse Attention: Unveiling How Chain-of-Thought Enhances Transformer Sample Efficiency
Kaiyue Wen, Huaqing Zhang, Hongzhou Lin, Jingzhao Zhang
07 Oct 2024
Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions
Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Luca Pesce, Ludovic Stephan
24 May 2024
Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data
Zhiwei Xu, Yutong Wang, Spencer Frei, Gal Vardi, Wei Hu
04 Oct 2023
Provable Advantage of Curriculum Learning on Parity Targets with Mixed Inputs
Emmanuel Abbe, Elisabetta Cornacchia, Aryo Lotfi
29 Jun 2023
On the non-universality of deep learning: quantifying the cost of symmetry
Emmanuel Abbe, Enric Boix-Adserà
05 Aug 2022
Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang
18 Jul 2022
Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett
15 Feb 2022
Learning Parities with Neural Networks
Amit Daniely, Eran Malach
18 Feb 2020
Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Lénaïc Chizat, Francis R. Bach
11 Feb 2020
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
Colin Wei, Jason D. Lee, Qiang Liu, Tengyu Ma
12 Oct 2018
A Mean Field View of the Landscape of Two-Layers Neural Networks
Song Mei, Andrea Montanari, Phan-Minh Nguyen
18 Apr 2018