ResearchTrend.AI
arXiv:2303.01475
Over-training with Mixup May Hurt Generalization


Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao · 2 March 2023 · NoLa

Papers citing "Over-training with Mixup May Hurt Generalization"

14 / 14 papers shown
Generalizable Prompt Tuning for Vision-Language Models
Qian Zhang · 04 Oct 2024 · VLM, VPVLM

Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification
Raja Kumar, Raghav Singhal, Pranamya Kulkarni, Deval Mehta, Kshitij Jadhav · 26 Sep 2024

MaskMatch: Boosting Semi-Supervised Learning Through Mask Autoencoder-Driven Feature Learning
Wenjin Zhang, Keyi Li, Sen Yang, Chenyang Gao, Wanzhao Yang, Sifan Yuan, I. Marsic · 10 May 2024

Tailoring Mixup to Data for Calibration
Quentin Bouniot, Pavlo Mozharovskyi, Florence d'Alché-Buc · 02 Nov 2023

Semantic Equivariant Mixup
Zongbo Han, Tianchi Xie, Bing Wu, Qinghua Hu, Changqing Zhang · 12 Aug 2023 · AAML

Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors
Paul S. Scotti, Atmadeep Banerjee, J. Goode, Stepan Shabalin, A. Nguyen, ..., Nathalie Verlinde, Elad Yundler, David Weisberg, K. A. Norman, Tanishq Mathew Abraham · 29 May 2023 · DiffM

Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup
Damien Teney, Jindong Wang, Ehsan Abbasnejad · 26 May 2023

Semi-Supervised Graph Imbalanced Regression
Gang Liu, Tong Zhao, Eric Inae, Te Luo, Meng Jiang · 20 May 2023

Supervision Interpolation via LossMix: Generalizing Mixup for Object Detection and Beyond
Thanh Vu, Baochen Sun, Bodi Yuan, Alex Ngai, Yueqi Li, Jan-Michael Frahm · 18 Mar 2023

Multi-scale Feature Learning Dynamics: Insights for Double Descent
Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie · 06 Dec 2021

Neural Network Weights Do Not Converge to Stationary Points: An Invariant Measure Perspective
Junzhe Zhang, Haochuan Li, S. Sra, Ali Jadbabaie · 12 Oct 2021

Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting
Chengyu Dong, Liyuan Liu, Jingbo Shang · 07 Oct 2021 · NoLa, AAML

On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications
Ziqiao Wang, Yongyi Mao · 07 Oct 2021 · FedML, MLT

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · 15 Sep 2016 · ODL