Using Pre-Training Can Improve Model Robustness and Uncertainty
Dan Hendrycks, Kimin Lee, Mantas Mazeika
28 January 2019 (arXiv:1901.09960)
NoLa

Papers citing "Using Pre-Training Can Improve Model Robustness and Uncertainty"

Showing 5 of 155 citing papers.
Intriguing properties of adversarial training at scale
Cihang Xie, Alan Yuille
AAML
10 Jun 2019
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman, Greg Yang, Jungshian Li, Pengchuan Zhang, Huan Zhang, Ilya P. Razenshteyn, Sébastien Bubeck
AAML
09 Jun 2019
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation
Raphael Gontijo-Lopes, Dong Yin, Ben Poole, Justin Gilmer, E. D. Cubuk
AAML
06 Jun 2019
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
G. Ding, Yash Sharma, Kry Yik-Chau Lui, Ruitong Huang
AAML
06 Dec 2018
Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML
04 Nov 2016