ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2306.14306
Adaptive Sharpness-Aware Pruning for Robust Sparse Networks
Anna Bair, Hongxu Yin, Maying Shen, Pavlo Molchanov, J. Álvarez
25 June 2023

Papers citing "Adaptive Sharpness-Aware Pruning for Robust Sparse Networks" (16 of 16 papers shown)

1. FairSAM: Fair Classification on Corrupted Data Through Sharpness-Aware Minimization
   Yucong Dai, Jie Ji, Xiaolong Ma, Yongkai Wu · 29 Mar 2025
2. Holistic Adversarially Robust Pruning
   Qi Zhao, Christian Wressnegger · 19 Dec 2024
3. Layer Pruning with Consensus: A Triple-Win Solution
   Leandro Giusti Mugnaini, Carolina Tavares Duarte, Anna H. Reali Costa, Artur Jordao · 21 Nov 2024
4. More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing
   Sagi Shaier, Francisco Pereira, K. Wense, Lawrence E Hunter, Matt Jones · MoE · 10 Oct 2024
5. Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
   Boqian Wu, Q. Xiao, Shunxin Wang, N. Strisciuglio, Mykola Pechenizkiy, M. V. Keulen, D. Mocanu, Elena Mocanu · OOD, 3DH · 03 Oct 2024
6. MoreauPruner: Robust Pruning of Large Language Models against Weight Perturbations
   Zixiao Wang, Jingwei Zhang, Wenqian Zhao, Farzan Farnia, Bei Yu · AAML · 11 Jun 2024
7. Effective Layer Pruning Through Similarity Metric Perspective
   Ian Pons, Bruno Yamamoto, Anna H. Reali Costa, Artur Jordao · 27 May 2024
8. An Adaptive Policy to Employ Sharpness-Aware Minimization
   Weisen Jiang, Hansi Yang, Yu Zhang, James T. Kwok · AAML · 28 Apr 2023
9. Structural Pruning via Latency-Saliency Knapsack
   Maying Shen, Hongxu Yin, Pavlo Molchanov, Lei Mao, Jianna Liu, J. Álvarez · 13 Oct 2022
10. Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models
    Clara Na, Sanket Vaibhav Mehta, Emma Strubell · 25 May 2022
11. SWAD: Domain Generalization by Seeking Flat Minima
    Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, Sungrae Park · MoMe · 17 Feb 2021
12. Hessian-Aware Pruning and Optimal Neural Implant
    Shixing Yu, Z. Yao, A. Gholami, Zhen Dong, Sehoon Kim, Michael W. Mahoney, Kurt Keutzer · 22 Jan 2021
13. Channel Pruning via Automatic Structure Search
    Mingbao Lin, Rongrong Ji, Yu-xin Zhang, Baochang Zhang, Yongjian Wu, Yonghong Tian · 23 Jan 2020
14. NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications
    Tien-Ju Yang, Andrew G. Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam · 09 Apr 2018
15. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
    Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam · 3DH · 17 Apr 2017
16. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016