Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning

arXiv:2206.03996 · 8 June 2022
Momin Abbas, Quan-Wu Xiao, Lisha Chen, Pin-Yu Chen, Tianyi Chen

Papers citing "Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning" (11 of 61 shown)

Sharp Minima Can Generalize For Deep Nets
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio · 15 Mar 2017 · 774 citations

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine · 09 Mar 2017 · 11,961 citations

Learning to Remember Rare Events
Lukasz Kaiser, Ofir Nachum, Aurko Roy, Samy Bengio · 09 Mar 2017 · 364 citations

Meta Networks
Tsendsuren Munkhdalai, Hong-ye Yu · 02 Mar 2017 · 1,069 citations

Learning to reinforcement learn
Jane X. Wang, Z. Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Rémi Munos, Charles Blundell, D. Kumaran, M. Botvinick · 17 Nov 2016 · 984 citations

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · 15 Sep 2016 · 2,946 citations

Learning to learn by gradient descent by gradient descent
Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas · 14 Jun 2016 · 2,008 citations

Matching Networks for One Shot Learning
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra · 13 Jun 2016 · 7,343 citations

Towards a Neural Statistician
Harrison Edwards, Amos Storkey · 07 Jun 2016 · 427 citations

Train faster, generalize better: Stability of stochastic gradient descent
Moritz Hardt, Benjamin Recht, Y. Singer · 03 Sep 2015 · 1,243 citations

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy · 11 Feb 2015 · 43,357 citations