Killing four birds with one Gaussian process: the relation between different test-time attacks

6 June 2018
Kathrin Grosse, M. Smith, Michael Backes
AAML

Papers citing "Killing four birds with one Gaussian process: the relation between different test-time attacks"

15 of 15 citing papers shown
| Title | Authors | Topics | Metrics | Date |
|---|---|---|---|---|
| The Limitations of Model Uncertainty in Adversarial Settings | Kathrin Grosse, David Pfaff, M. Smith, Michael Backes | AAML | 44 · 34 · 0 | 06 Dec 2018 |
| Adversarial vulnerability for any classifier | Alhussein Fawzi, Hamza Fawzi, Omar Fawzi | AAML | 87 · 251 · 0 | 23 Feb 2018 |
| DARTS: Deceiving Autonomous Cars with Toxic Signs | Chawin Sitawarin, A. Bhagoji, Arsalan Mosenia, M. Chiang, Prateek Mittal | AAML | 91 · 235 · 0 | 18 Feb 2018 |
| Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | Anish Athalye, Nicholas Carlini, D. Wagner | AAML | 224 · 3,186 · 0 | 01 Feb 2018 |
| Deep Neural Networks as Gaussian Processes | Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein | UQCV, BDL | 128 · 1,093 · 0 | 01 Nov 2017 |
| Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization | Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli | AAML | 99 · 633 · 0 | 29 Aug 2017 |
| Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid | Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli | AAML | 52 · 98 · 0 | 23 Aug 2017 |
| Evasion Attacks against Machine Learning at Test Time | Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli | AAML | 157 · 2,153 · 0 | 21 Aug 2017 |
| Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks | John Bradshaw, A. G. Matthews, Zoubin Ghahramani | BDL, AAML | 103 · 171 · 0 | 08 Jul 2017 |
| Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods | Nicholas Carlini, D. Wagner | AAML | 126 · 1,864 · 0 | 20 May 2017 |
| Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection | Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Dan Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, Fabio Roli | AAML | 55 · 284 · 0 | 28 Apr 2017 |
| Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning | Briland Hitaj, G. Ateniese, Fernando Perez-Cruz | FedML | 117 · 1,404 · 0 | 24 Feb 2017 |
| Membership Inference Attacks against Machine Learning Models | Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov | SLR, MIALM, MIACV | 261 · 4,135 · 0 | 18 Oct 2016 |
| Stealing Machine Learning Models via Prediction APIs | Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart | SILM, MLAU | 107 · 1,807 · 0 | 09 Sep 2016 |
| Poisoning Attacks against Support Vector Machines | Battista Biggio, B. Nelson, Pavel Laskov | AAML | 112 · 1,593 · 0 | 27 Jun 2012 |