ResearchTrend.AI

High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization (arXiv:2406.03171)

5 June 2024
Yihang Chen, Fanghui Liu, Taiji Suzuki, Volkan Cevher

Papers citing "High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization"

12 papers
Understanding Why Generalized Reweighting Does Not Improve Over ERM
Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar · 28 Jan 2022 · OOD

Covariate Shift in High-Dimensional Random Feature Regression
Nilesh Tripuraneni, Ben Adlam, Jeffrey Pennington · 16 Nov 2021 · OOD

Learning with invariances in random features and kernel models
Song Mei, Theodor Misiakiewicz, Andrea Montanari · 25 Feb 2021 · OOD

Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration
Song Mei, Theodor Misiakiewicz, Andrea Montanari · 26 Jan 2021

When Do Neural Networks Outperform Kernel Methods?
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari · 24 Jun 2020

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression
Denny Wu, Ji Xu · 10 Jun 2020

Rethinking Importance Weighting for Deep Learning under Distribution Shift
Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama · 08 Jun 2020

Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari · 27 Apr 2019 · MLT

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani · 19 Mar 2019

Marginal Singularity, and the Benefits of Labels in Covariate-Shift
Samory Kpotufe, Guillaume Martinet · 05 Mar 2018

Orthogonal Random Features
Felix X. Yu, A. Suresh, K. Choromanski, D. Holtmann-Rice, Sanjiv Kumar · 28 Oct 2016

Ridge regression and asymptotic minimax estimation over spheres of growing dimension
Lee H. Dicker · 15 Jan 2016