Towards Deep Learning Models Resistant to Adversarial Attacks

19 June 2017
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM, OOD
ArXiv (abs) · PDF · HTML · GitHub (752★)
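
This paper introduced PGD adversarial training: train on worst-case perturbations found by projected gradient descent inside an L-infinity ball, so that the model minimizes the loss under the inner maximization. Below is a minimal sketch of that training loop, assuming hypothetical names (`model`, `loader`, `optimizer`) and typical L-infinity hyperparameters (eps=8/255, alpha=2/255, 10 steps); these are illustrative choices, not values taken from the paper's released code.

```python
# Minimal sketch of PGD adversarial training in the spirit of Madry et al. (2017).
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: projected gradient ascent within an L-infinity ball of radius eps."""
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """Outer minimization: one epoch of training on PGD adversarial examples only."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)           # inner maximization
        loss = F.cross_entropy(model(x_adv), y)   # outer minimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```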

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

50 / 6,612 papers shown
Exploring the Robustness and Transferability of Patch-Based Adversarial Attacks in Quantized Neural Networks
Amira Guesmi
B. Ouni
Mohamed Bennai
AAML
144
0
0
22 Nov 2024
Adversarial Prompt Distillation for Vision-Language Models
Lin Luo
Xin Wang
Bojia Zi
Shihao Zhao
Xingjun Ma
Yu-Gang Jiang
AAML, VLM
180
4
0
22 Nov 2024
Learning Fair Robustness via Domain Mixup
Meiyu Zhong
Ravi Tandon
OOD
122
0
0
21 Nov 2024
Creating a Formally Verified Neural Network for Autonomous Navigation: An Experience Report
Syed Ali Asadullah Bukhari
Thomas Flinkow
M. Inkarbekov
Barak A. Pearlmutter
Rosemary Monahan
139
0
0
21 Nov 2024
Rethinking the Intermediate Features in Adversarial Attacks: Misleading Robotic Models via Adversarial Distillation
Ke Zhao
Huayang Huang
Miao Li
Yu Wu
AAML
114
1
0
21 Nov 2024
On the Fairness, Diversity and Reliability of Text-to-Image Generative Models
Jordan Vice
Naveed Akhtar
Leonid Sigal
Richard Hartley
Ajmal Mian
EGVM
139
0
0
21 Nov 2024
TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models
Xin Wang
Kai-xiang Chen
Jiaming Zhang
Jingjing Chen
Xingjun Ma
AAML, VPVLM, VLM
148
3
0
20 Nov 2024
Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks
Yong Xie
Weijie Zheng
Hanxun Huang
Guangnan Ye
Xingjun Ma
AAML
170
1
0
20 Nov 2024
Exploring adversarial robustness of JPEG AI: methodology, comparison and new methods
Egor Kovalev
Georgii Bychkov
Khaled Abud
A. Gushchin
Anna Chistyakova
Sergey Lavrushkin
D. Vatolin
Anastasia Antsiferova
AAML
175
2
0
18 Nov 2024
Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics
Taowen Wang
Dongfang Liu
James Liang
Wenhao Yang
Qifan Wang
Cheng Han
Jiebo Luo
Ruixiang Tang
AAML
182
6
0
18 Nov 2024
Conceptwm: A Diffusion Model Watermark for Concept Protection
Liangqi Lei
Keke Gai
Jing Yu
Liehuang Zhu
Qi Wu
WIGM
168
2
0
18 Nov 2024
CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization
Nay Myat Min
Long H. Pham
Yige Li
Jun Sun
AAML
152
5
0
18 Nov 2024
SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach
Ruoxi Sun
Jiamin Chang
Hammond Pearce
Chaowei Xiao
B. Li
Qi Wu
Surya Nepal
Minhui Xue
109
0
0
17 Nov 2024
Llama Guard 3 Vision: Safeguarding Human-AI Image Understanding Conversations
Jianfeng Chi
Ujjwal Karn
Hongyuan Zhan
Eric Michael Smith
Javier Rando
Yiming Zhang
Kate Plawiak
Zacharie Delpierre Coudert
Kartikeya Upasani
Mahesh Pasupuleti
MLLM, 3DH
124
32
0
15 Nov 2024
Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology
Dhananjay Tomar
Alexander Binder
Andreas Kleppe
79
0
0
14 Nov 2024
Enhancing generalization in high energy physics using white-box adversarial attacks
Franck Rothen
Samuel Klein
Matthew Leigh
T. Golling
AAML
55
1
0
14 Nov 2024
Transferable Adversarial Attacks against ASR
Xiaoxue Gao
Zexin Li
Yiming Chen
Cong Liu
Haoyang Li
AAML
59
1
0
14 Nov 2024
New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook
Meng Yang
Tianqing Zhu
Chi Liu
Wanlei Zhou
Shui Yu
Philip S. Yu
AAML, ELM, PILM
112
1
0
12 Nov 2024
A Survey on Adversarial Machine Learning for Code Data: Realistic Threats, Countermeasures, and Interpretations
Yulong Yang
Haoran Fan
Chenhao Lin
Qian Li
Zhengyu Zhao
Chao Shen
Xiaohong Guan
AAML
75
0
0
12 Nov 2024
ProP: Efficient Backdoor Detection via Propagation Perturbation for Overparametrized Models
Tao Ren
Qiongxiu Li
AAML
77
0
0
11 Nov 2024
The Inherent Adversarial Robustness of Analog In-Memory Computing
Corey Lammie
Julian Büchel
A. Vasilopoulos
Manuel Le Gallo
Abu Sebastian
AAML
122
2
0
11 Nov 2024
Computable Model-Independent Bounds for Adversarial Quantum Machine Learning
Bacui Li
T. Alpcan
Chandra Thapa
Udaya Parampalli
AAML
71
0
0
11 Nov 2024
Adversarial Detection with a Dynamically Stable System
Xiaowei Long
Jie Lin
Xiangyuan Yang
AAML
74
0
0
11 Nov 2024
Neural Fingerprints for Adversarial Attack Detection
Haim Fisher
Moni Shahar
Yehezkel S. Resheff
AAML
28
0
0
07 Nov 2024
Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging
Rui Luo
Jie Bao
Zhixin Zhou
Chuangyin Dang
MedIm, AAML
250
7
0
07 Nov 2024
Verification of Neural Networks against Convolutional Perturbations via Parameterised Kernels
Benedikt Brückner
Alessio Lomuscio
AAML
113
1
0
07 Nov 2024
Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization
Yuhao He
Jinyu Tian
Xianwei Zheng
Li Dong
Yuanman Li
L. Zhang
AAML
88
0
0
06 Nov 2024
Enhancing Adversarial Robustness via Uncertainty-Aware Distributional Adversarial Training
Junhao Dong
Xinghua Qu
Zhiyuan Wang
Yew-Soon Ong
AAML
91
1
0
05 Nov 2024
Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack
Xiaojun Jia
Sensen Gao
Qing Guo
Ke Ma
Yihao Huang
Simeng Qin
Yang Liu
Ivor Tsang
Xiaochun Cao
AAML
87
3
0
04 Nov 2024
User-wise Perturbations for User Identity Protection in EEG-Based BCIs
Xiaoqing Chen
Siyang Li
Yunlu Tu
Ziwei Wang
Dongrui Wu
71
2
0
04 Nov 2024
Alignment-Based Adversarial Training (ABAT) for Improving the Robustness and Accuracy of EEG-Based BCIs
Xiaoqing Chen
Ziwei Wang
Dongrui Wu
AAML
128
9
0
04 Nov 2024
Optimal Classification under Performative Distribution Shift
Edwige Cyffers
Muni Sreenivas Pydi
Jamal Atif
Olivier Cappé
96
4
0
04 Nov 2024
Learning Where to Edit Vision Transformers
Yunqiao Yang
Long-Kai Huang
Shengzhuang Chen
Kede Ma
Ying Wei
KELM
91
1
0
04 Nov 2024
UniGuard: Towards Universal Safety Guardrails for Jailbreak Attacks on Multimodal Large Language Models
Sejoon Oh
Yiqiao Jin
Megha Sharma
Donghyun Kim
Eric Ma
Gaurav Verma
Srijan Kumar
125
7
0
03 Nov 2024
Uncertainty-based Offline Variational Bayesian Reinforcement Learning for Robustness under Diverse Data Corruptions
Rui Yang
Jie Wang
Guoping Wu
Yangqiu Song
AAML, OffRL
140
3
0
01 Nov 2024
DeepCore: Simple Fingerprint Construction for Differentiating Homologous and Piracy Models
Haifeng Sun
Lan Zhang
Xiang-Yang Li
97
0
0
01 Nov 2024
ReMatching Dynamic Reconstruction Flow
Sara Oblak
Despoina Paschalidou
Sanja Fidler
Matan Atzmon
167
0
0
01 Nov 2024
Protecting Feed-Forward Networks from Adversarial Attacks Using Predictive Coding
Ehsan Ganjidoost
Jeff Orchard
AAML
47
0
0
31 Oct 2024
I Can Hear You: Selective Robust Training for Deepfake Audio Detection
Zirui Zhang
Wei Hao
Aroon Sankoh
William Lin
Emanuel Mendiola-Ortiz
Junfeng Yang
Chengzhi Mao
AAML
68
3
0
31 Oct 2024
DiffPAD: Denoising Diffusion-based Adversarial Patch Decontamination
Jia Fu
Xiao Zhang
Sepideh Pashami
Fatemeh Rahimian
Anders Holst
DiffM, AAML
82
0
0
31 Oct 2024
Noise as a Double-Edged Sword: Reinforcement Learning Exploits Randomized Defenses in Neural Networks
Steve Bakos
Pooria Madani
Heidar Davoudi
AAML
78
0
0
31 Oct 2024
GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting
Xiufeng Huang
Ruiqi Li
Yiu-ming Cheung
Ka Chun Cheung
Simon See
Renjie Wan
3DGS
94
5
0
31 Oct 2024
ARQ: A Mixed-Precision Quantization Framework for Accurate and Certifiably Robust DNNs
Yuchen Yang
Shubham Ugare
Yifan Zhao
Gagandeep Singh
Sasa Misailovic
MQ
92
0
0
31 Oct 2024
Keep on Swimming: Real Attackers Only Need Partial Knowledge of a Multi-Model System
Julian Collado
Kevin Stangl
AAML
62
0
0
30 Oct 2024
Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization
Omar Montasser
Han Shao
Emmanuel Abbe
OOD
58
2
0
30 Oct 2024
ProTransformer: Robustify Transformers via Plug-and-Play Paradigm
Zhichao Hou
Weizhi Gao
Yuchen Shen
Feiyi Wang
Xiaorui Liu
VLM
70
2
0
30 Oct 2024
Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector
Youcheng Huang
Fengbin Zhu
Jingkun Tang
Pan Zhou
Wenqiang Lei
Jiancheng Lv
Tat-Seng Chua
AAML
69
4
0
30 Oct 2024
Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images
Qi Song
Ziyuan Luo
Ka Chun Cheung
Simon See
Renjie Wan
104
3
0
30 Oct 2024
One Prompt to Verify Your Models: Black-Box Text-to-Image Models Verification via Non-Transferable Adversarial Attacks
Ji Guo
Wenbo Jiang
Rui Zhang
Guoming Lu
Hongwei Li
AAML
160
0
0
30 Oct 2024
FAIR-TAT: Improving Model Fairness Using Targeted Adversarial Training
Tejaswini Medi
Steffen Jung
Margret Keuper
AAML
94
3
0
30 Oct 2024