Adversarial Attacks and Defences: A Survey

28 September 2018
Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, Debdeep Mukhopadhyay
Tags: AAML, OOD

Papers citing "Adversarial Attacks and Defences: A Survey"

Showing 50 of 109 citing papers:
• Adversarial Coevolutionary Illumination with Generational Adversarial MAP-Elites. Timothée Anne, Noah Syrkis, Meriem Elhosni, Florian Turati, Franck Legendre, Alain Jaquier, Sebastian Risi. 10 May 2025.
• XBreaking: Explainable Artificial Intelligence for Jailbreaking LLMs. Marco Arazzi, Vignesh Kumar Kembu, Antonino Nocera, V. P. 30 Apr 2025.
• Valkyrie: A Response Framework to Augment Runtime Detection of Time-Progressive Attacks. Nikhilesh Singh, Chester Rebeiro. 21 Apr 2025.
• Deep Learning-based Intrusion Detection Systems: A Survey. Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, Xibin Zhao. 10 Apr 2025.
• Enabling AutoML for Zero-Touch Network Security: Use-Case Driven Analysis. Li Yang, Mirna El Rajab, Abdallah Shami, Sami Muhaidat. 28 Feb 2025.
• Protego: Detecting Adversarial Examples for Vision Transformers via Intrinsic Capabilities. Jialin Wu, Kaikai Pan, Yanjiao Chen, Jiangyi Deng, Shengyuan Pang, Wenyuan Xu. Tags: ViT, AAML. 13 Jan 2025.
• On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning. Yongyi Su, Yushu Li, Nanqing Liu, Kui Jia, Xulei Yang, Chuan-Sheng Foo, Xun Xu. Tags: TTA, AAML. 07 Oct 2024.
• A Cost-Aware Approach to Adversarial Robustness in Neural Networks. Charles Meyers, Mohammad Reza Saleh Sedghpour, Tommy Löfstedt, Erik Elmroth. Tags: OOD, AAML. 11 Sep 2024.
• Explainable Graph Neural Networks Under Fire. Zhong Li, Simon Geisler, Yuhang Wang, Stephan Günnemann, M. Leeuwen. Tags: AAML. 10 Jun 2024.
• DREW : Towards Robust Data Provenance by Leveraging Error-Controlled Watermarking. Mehrdad Saberi, Vinu Sankar Sadasivan, Arman Zarei, Hessam Mahdavifar, S. Feizi. 05 Jun 2024.
• On Robust Reinforcement Learning with Lipschitz-Bounded Policy Networks. Nicholas H. Barbara, Ruigang Wang, I. Manchester. 19 May 2024.
• Semantic Stealth: Adversarial Text Attacks on NLP Using Several Methods. Roopkatha Dey, Aivy Debnath, Sayak Kumar Dutta, Kaustav Ghosh, Arijit Mitra, Arghya Roy Chowdhury, Jaydip Sen. Tags: AAML, SILM. 08 Apr 2024.
• Threats, Attacks, and Defenses in Machine Unlearning: A Survey. Ziyao Liu, Huanyi Ye, Chen Chen, Yongsen Zheng, K. Lam. Tags: AAML, MU. 20 Mar 2024.
• Understanding and Improving Training-free Loss-based Diffusion Guidance. Yifei Shen, Xinyang Jiang, Yezhen Wang, Yifan Yang, Dongqi Han, Dongsheng Li. Tags: FaML. 19 Mar 2024.
• A Language Model's Guide Through Latent Space. Dimitri von Rutte, Sotiris Anagnostidis, Gregor Bachmann, Thomas Hofmann. 22 Feb 2024.
• Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models. Erik Arakelyan, Zhaoqi Liu, Isabelle Augenstein. Tags: AAML. 25 Jan 2024.
• SENet: Visual Detection of Online Social Engineering Attack Campaigns. Irfan Ozen, Karthika Subramani, Phani Vadrevu, R. Perdisci. 10 Jan 2024.
• DTA: Distribution Transform-based Attack for Query-Limited Scenario. Renyang Liu, Wei Zhou, Xin Jin, Song Gao, Yuanyu Wang, Ruxin Wang. 12 Dec 2023.
• Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems. Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan. Tags: AAML, SILM. 20 Nov 2023.
• A Framework for Monitoring and Retraining Language Models in Real-World Applications. Jaykumar Kasundra, Claudia Schulz, Melicaalsadat Mirsafian, Stavroula Skylaki. Tags: OffRL, LRM. 16 Nov 2023.
• Revealing CNN Architectures via Side-Channel Analysis in Dataflow-based Inference Accelerators. Hansika Weerasena, Prabhat Mishra. Tags: FedML. 01 Nov 2023.
• Adversarial Attacks on Fairness of Graph Neural Networks. Binchi Zhang, Yushun Dong, Chen Chen, Yada Zhu, Minnan Luo, Jundong Li. 20 Oct 2023.
• HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds. Hejia Geng, Peng Li. Tags: AAML. 20 Aug 2023.
• Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks. Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein. Tags: AAML. 15 Aug 2023.
• Jailbroken: How Does LLM Safety Training Fail? Alexander Wei, Nika Haghtalab, Jacob Steinhardt. 05 Jul 2023.
• A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking. Shaohui Mei, Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Lap-Pui Chau. Tags: AAML. 21 Jun 2023.
• NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations. Y. Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan Lin. Tags: AAML. 10 Jun 2023.
• Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making. Xuanjie Fang, Sijie Cheng, Yang Liu, Wen Wang. Tags: AAML. 27 May 2023.
• Fantastic DNN Classifiers and How to Identify them without Data. Nathaniel R. Dean, D. Sarkar. 24 May 2023.
• Causality-Aided Trade-off Analysis for Machine Learning Fairness. Zhenlan Ji, Pingchuan Ma, Shuai Wang, Yanhui Li. Tags: FaML. 22 May 2023.
• Security and Privacy Issues for Urban Smart Traffic Infrastructure. Anubhab Baksi, A. I. S. Khalil, Anupam Chattopadhyay. 17 Apr 2023.
• To be Robust and to be Fair: Aligning Fairness with Robustness. Junyi Chai, Xiaoqian Wang. 31 Mar 2023.
• It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness. Peiyu Xiong, Michael W. Tegegn, Jaskeerat Singh Sarin, Shubhraneel Pal, Julia Rubin. Tags: SILM, AAML. 17 Mar 2023.
• Adversarial Attacks on Machine Learning in Embedded and IoT Platforms. Christian Westbrook, S. Pasricha. Tags: AAML. 03 Mar 2023.
• Enhancing Vulnerability Prioritization: Data-Driven Exploit Predictions with Community-Driven Insights. Jay Jacobs, Sasha Romanosky, Octavian Suciu, Benjamin Edwards, Armin Sarabi. 27 Feb 2023.
• Identifying Adversarially Attackable and Robust Samples. Vyas Raina, Mark J. F. Gales. Tags: AAML. 30 Jan 2023.
• A Comparative Study of Image Disguising Methods for Confidential Outsourced Learning. Sagar Sharma, Yuechun Gu, Keke Chen. 31 Dec 2022.
• GAN-based Domain Inference Attack. Yuechun Gu, Keke Chen. 22 Dec 2022.
• Enhancing Quantum Adversarial Robustness by Randomized Encodings. Weiyuan Gong, D. Yuan, Weikang Li, D. Deng. Tags: AAML. 05 Dec 2022.
• Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack. Yunfeng Diao, He Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David C. Hogg, Meng Wang. Tags: AAML. 21 Nov 2022.
• Scaling Laws for Reward Model Overoptimization. Leo Gao, John Schulman, Jacob Hilton. Tags: ALM. 19 Oct 2022.
• Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods. Evan Crothers, Nathalie Japkowicz, H. Viktor. Tags: DeLMO. 13 Oct 2022.
• Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning. Yongyuan Liang, Yanchao Sun, Ruijie Zheng, Furong Huang. Tags: OOD, AAML, OffRL. 12 Oct 2022.
• On Optimal Learning Under Targeted Data Poisoning. Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran. Tags: AAML, FedML. 06 Oct 2022.
• A Comprehensive Review of Trends, Applications and Challenges In Out-of-Distribution Detection. Navid Ghassemi, E. F. Ersi. Tags: AAML, OODD. 26 Sep 2022.
• Data Provenance via Differential Auditing. Xin Mu, Ming Pang, Feida Zhu. 04 Sep 2022.
• Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks. Bernd Prach, Christoph H. Lampert. 05 Aug 2022.
• Invariant Feature Learning for Generalized Long-Tailed Classification. Kaihua Tang, Mingyuan Tao, Jiaxin Qi, Zhenguang Liu, Hanwang Zhang. Tags: VLM. 19 Jul 2022.
• Distance Learner: Incorporating Manifold Prior to Model Training. Aditya Chetan, Nipun Kwatra. 14 Jul 2022.
• Statistical Detection of Adversarial examples in Blockchain-based Federated Forest In-vehicle Network Intrusion Detection Systems. I. Aliyu, Sélinde Van Engelenburg, Muhammed Muazu, Jinsul Kim, C. Lim. Tags: AAML. 11 Jul 2022.