ResearchTrend.AI
A Red Teaming Framework for Securing AI in Maritime Autonomous Systems
arXiv:2312.11500
8 December 2023
Mathew J. Walter, Aaron Barrett, Kimberly Tam

Papers citing "A Red Teaming Framework for Securing AI in Maritime Autonomous Systems"

22 papers shown
Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation
Tong Wu, Tianhao Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal
22 Jul 2022

I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber
16 Jun 2022

ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches
Maura Pintor, Daniele Angioni, Angelo Sotgiu, Christian Scano, Ambra Demontis, Battista Biggio, Fabio Roli
07 Mar 2022

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
21 Feb 2022

Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection
Jiangjiang Liu, Alexander Levine, Chun Pong Lau, Ramalingam Chellappa, Soheil Feizi
08 Dec 2021

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim
26 Apr 2021

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
22 Jun 2020

Arms Race in Adversarial Malware Detection: A Survey
Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
24 May 2020

Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models
Shuo Wang, Surya Nepal, Carsten Rudolph, M. Grobler, Shangyu Chen, Tianle Chen
10 Jan 2020

On the Detection of Digital Face Manipulation
H. Dang, Anand Balakrishnan, J. Stehouwer, Connor Christopherson, David Wingate
03 Oct 2019

Role of Spatial Context in Adversarial Robustness for Object Detection
Aniruddha Saha, Akshayvarun Subramanya, Koninika Patil, Hamed Pirsiavash
30 Sep 2019

DPatch: An Adversarial Patch Attack on Object Detectors
Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Helen Li, Yiran Chen
05 Jun 2018

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
Pouya Samangouei, Maya Kabkab, Rama Chellappa
17 May 2018

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song
15 Dec 2017

Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser
Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, Jun Zhu
08 Dec 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
09 Jun 2017

Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
19 May 2017

DeepSurv: Personalized Treatment Recommender System Using A Cox Proportional Hazards Deep Neural Network
Jared Katzman, Uri Shaham, Jonathan Bates, A. Cloninger, Tingting Jiang, Y. Kluger
02 Jun 2016

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
20 Dec 2014

Security Evaluation of Support Vector Machines in Adversarial Environments
Battista Biggio, Igino Corona, B. Nelson, Benjamin I. P. Rubinstein, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, Fabio Roli
30 Jan 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
21 Dec 2013

Poisoning Attacks against Support Vector Machines
Battista Biggio, B. Nelson, Pavel Laskov
27 Jun 2012