Using Randomness to Improve Robustness of Machine-Learning Models Against Evasion Attacks (arXiv:1808.03601)

10 August 2018
Fan Yang, Zhiyuan Chen
AAML

Papers citing "Using Randomness to Improve Robustness of Machine-Learning Models Against Evasion Attacks"

5 papers shown
On the vulnerability of fingerprint verification systems to fake fingerprint attacks
J. Galbally-Herrero, Julian Fierrez-Aguilar, J.D. Rodriguez-Gonzalez, F. Alonso-Fernandez, J. Ortega-Garcia, M. Tapiador
11 Jul 2022
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM
22 Aug 2017
Robust Physical-World Attacks on Deep Learning Models
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Yue Liu, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, D. Song
AAML
27 Jul 2017
Stealing Machine Learning Models via Prediction APIs
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
SILM, MLAU
09 Sep 2016
Adversarial Perturbations Against Deep Neural Networks for Malware Classification
Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick McDaniel
AAML
14 Jun 2016