ResearchTrend.AI

Cited By: Stochastic Re-weighted Gradient Descent via Distributionally Robust Optimization (arXiv:2306.09222)
15 June 2023
Ramnath Kumar, Kushal Majmundar, Dheeraj M. Nagaraj, A. Suggala
Tags: ODL
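This page lists only the paper's title and metadata, so as a rough illustration of the idea the title names, here is a minimal sketch of loss-based sample re-weighting for SGD. It is not the paper's algorithm: the softmax-over-losses weighting, the temperature `tau`, and the toy logistic-regression setup are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy problem: logistic regression on separable synthetic data.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)           # labels in {0, 1}

def per_example_loss(w, Xb, yb):
    """Logistic loss of each example in the batch (no averaging)."""
    s = 2.0 * yb - 1.0                        # labels in {-1, +1}
    return np.logaddexp(0.0, -(Xb @ w) * s)   # log(1 + exp(-s * <w, x>))

def reweighted_sgd_step(w, Xb, yb, lr=0.1, tau=1.0):
    """One SGD step with softmax-over-loss sample weights (DRO-style)."""
    losses = per_example_loss(w, Xb, yb)
    weights = np.exp((losses - losses.max()) / tau)   # stable softmax
    weights /= weights.sum()                          # up-weights hard examples
    sigma = 1.0 / (1.0 + np.exp(-(Xb @ w)))           # predicted P(y = 1)
    grad_rows = (sigma - yb)[:, None] * Xb            # per-example gradients
    grad = weights @ grad_rows                        # loss-weighted average
    return w - lr * grad

w = np.zeros(d)
for _ in range(500):
    idx = rng.choice(n, size=32, replace=False)
    w = reweighted_sgd_step(w, X[idx], y[idx])

accuracy = float(((X @ w > 0) == (y > 0.5)).mean())
```

The design choice being sketched: instead of averaging per-example gradients uniformly, the step weights each gradient by a softmax over the per-example losses, so harder examples drive the update more, with `tau` controlling how aggressive the re-weighting is.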

Papers citing "Stochastic Re-weighted Gradient Descent via Distributionally Robust Optimization"

15 papers shown:
• Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining
  Daouda Sow, Herbert Woisetschläger, Saikiran Bulusu, Shiqiang Wang, Hans-Arno Jacobsen, Yingbin Liang
  10 Feb 2025
• Responsible AI (RAI) Games and Ensembles
  Yash Gupta, Runtian Zhai, A. Suggala, Pradeep Ravikumar
  28 Oct 2023
• A Challenge in Reweighting Data with Bilevel Optimization
  Anastasia Ivanova, Pierre Ablin
  26 Oct 2023
• EHI: End-to-end Learning of Hierarchical Index for Efficient Dense Retrieval
  Ramnath Kumar, Anshul Mittal, Nilesh Gupta, Aditya Kusupati, Inderjit Dhillon, Prateek Jain
  13 Oct 2023
• Fairness under Covariate Shift: Improving Fairness-Accuracy tradeoff with few Unlabeled Test Samples
  Shreyas Havaldar, Jatin Chauhan, Karthikeyan Shanmugam, Jay Nandy, A. Raghuveer
  11 Oct 2023
• Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees
  Anastasia Koloskova, Hadrien Hendrikx, Sebastian U. Stich
  02 May 2023
• Stochastic Constrained DRO with a Complexity Independent of Sample Size
  Q. Qi, Jiameng Lyu, Kung-Sik Chan, E. Bai, Tianbao Yang
  11 Oct 2022
• Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks
  Sravanti Addepalli, Anshul Nasery, R. Venkatesh Babu, Praneeth Netrapalli, Prateek Jain
  04 Oct 2022 · Tags: AAML
• On Tilted Losses in Machine Learning: Theory and Applications
  Tian Li, Ahmad Beirami, Maziar Sanjabi, Virginia Smith
  13 Sep 2021
• Curriculum Learning: A Survey
  Petru Soviany, Radu Tudor Ionescu, Paolo Rota, N. Sebe
  25 Jan 2021 · Tags: ODL
• Out-of-Distribution Generalization via Risk Extrapolation (REx)
  David M. Krueger, Ethan Caballero, J. Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Rémi Le Priol, Aaron Courville
  02 Mar 2020 · Tags: OOD
• Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
  Chelsea Finn, Pieter Abbeel, Sergey Levine
  09 Mar 2017 · Tags: OOD
• Domain-Adversarial Training of Neural Networks
  Yaroslav Ganin, E. Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, M. Marchand, Victor Lempitsky
  28 May 2015 · Tags: GAN, OOD
• Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
  Ohad Shamir, Tong Zhang
  08 Dec 2012
• SMOTE: Synthetic Minority Over-sampling Technique
  Nitesh V. Chawla, Kevin W. Bowyer, Lawrence Hall, W. Kegelmeyer
  09 Jun 2011 · Tags: AI4TS