Towards Deep Learning Models Resistant to Adversarial Attacks

19 June 2017
A. Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
    SILM
    OOD

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

50 / 6,508 papers shown
Aligning Multimodal LLM with Human Preference: A Survey
Tao Yu
Yuyao Zhang
Chaoyou Fu
Junkang Wu
Jinda Lu
...
Qingsong Wen
Z. Zhang
Yan Huang
Liang Wang
Tieniu Tan
188
2
0
18 Mar 2025
LipShiFT: A Certifiably Robust Shift-based Vision Transformer
Rohan Menon
Nicola Franco
Stephan Günnemann
53
0
0
18 Mar 2025
AIGVE-Tool: AI-Generated Video Evaluation Toolkit with Multifaceted Benchmark
Xinhao Xiang
Xiao Liu
Zizhong Li
Zhuosheng Liu
Jiawei Zhang
55
0
0
18 Mar 2025
RAT: Boosting Misclassification Detection Ability without Extra Data
Ge Yan
Tsui-Wei Weng
AAML
95
0
0
18 Mar 2025
GSBA$^K$: $top$-$K$ Geometric Score-based Black-box Attack
Md. Farhamdur Reza
Richeng Jin
Tianfu Wu
H. Dai
AAML
47
0
0
17 Mar 2025
Securing Virtual Reality Experiences: Unveiling and Tackling Cybersickness Attacks with Explainable AI
Ripan Kumar Kundu
Matthew Denton
Genova Mongalo
Prasad Calyam
K. A. Hoque
AAML
46
0
0
17 Mar 2025
Safeguarding LLM Embeddings in End-Cloud Collaboration via Entropy-Driven Perturbation
Shuaifan Jin
Xiaoyi Pang
Zhibo Wang
He Wang
Jiacheng Du
Jiahui Hu
Kui Ren
SILM
AAML
80
0
0
17 Mar 2025
Improving Generalization of Universal Adversarial Perturbation via Dynamic Maximin Optimization
Yuyao Zhang
Yingzhe Xu
Junyu Shi
L. Zhang
Shengshan Hu
Minghui Li
Yanjun Zhang
AAML
51
1
0
17 Mar 2025
Robust Dataset Distillation by Matching Adversarial Trajectories
Wei Lai
Tianyu Ding
Dongdong Ren
Lei Wang
Jing Huo
Yang Gao
Wenbin Li
AAML
DD
62
0
0
15 Mar 2025
Provenance Detection for AI-Generated Images: Combining Perceptual Hashing, Homomorphic Encryption, and AI Detection Models
Shree Singhi
Aayan Yadav
Aayush Gupta
Shariar Ebrahimi
Parisa Hassanizadeh
50
0
0
14 Mar 2025
Weakly Supervised Contrastive Adversarial Training for Learning Robust Features from Semi-supervised Data
Lilin Zhang
Chengpei Wu
Ning Yang
39
0
0
14 Mar 2025
Are Deep Speech Denoising Models Robust to Adversarial Noise?
Will Schwarzer
Philip S. Thomas
Andrea Fanelli
Xiaoyu Liu
54
0
0
14 Mar 2025
Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense
Shuyang Hao
Yijiao Wang
Bryan Hooi
Ming Yang
Jiaheng Liu
Chengcheng Tang
Zi Huang
Yujun Cai
AAML
54
0
0
14 Mar 2025
Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification
Yingjie Zhang
Tong Liu
Zhe Zhao
Guozhu Meng
Kai Chen
AAML
55
1
0
14 Mar 2025
Stabilizing Quantization-Aware Training by Implicit-Regularization on Hessian Matrix
Junbiao Pang
Tianyang Cai
44
1
0
14 Mar 2025
Enhancing Facial Privacy Protection via Weakening Diffusion Purification
Ali Salar
Qing Liu
Yingli Tian
Guoying Zhao
DiffM
56
0
0
13 Mar 2025
Identifying Trustworthiness Challenges in Deep Learning Models for Continental-Scale Water Quality Prediction
Xiaobo Xia
Xiaofeng Liu
Jiale Liu
K. Fang
Lu Lu
Samet Oymak
William S. Currie
Tongliang Liu
64
0
0
13 Mar 2025
Learning Interpretable Logic Rules from Deep Vision Models
Chuqin Geng
Yuhe Jiang
Ziyu Zhao
Haolin Ye
Zhaoyue Wang
X. Si
NAI
FAtt
VLM
74
1
0
13 Mar 2025
AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption
Joonsung Jeon
Woo Jae Kim
Suhyeon Ha
Sooel Son
Sung-eui Yoon
DiffM
AAML
63
0
0
13 Mar 2025
Robustness Tokens: Towards Adversarial Robustness of Transformers
Brian Pulfer
Yury Belousov
S. Voloshynovskiy
AAML
45
0
0
13 Mar 2025
Attacking Multimodal OS Agents with Malicious Image Patches
Lukas Aichberger
Alasdair Paren
Y. Gal
Philip Torr
Adel Bibi
AAML
62
3
0
13 Mar 2025
A Frustratingly Simple Yet Highly Effective Attack Baseline: Over 90% Success Rate Against the Strong Black-box Models of GPT-4.5/4o/o1
Zhaoyi Li
Xiaohan Zhao
Dong-Dong Wu
Jiacheng Cui
Zhiqiang Shen
AAML
VLM
75
1
0
13 Mar 2025
Enhancing Adversarial Example Detection Through Model Explanation
Qian Ma
Ziping Ye
AAML
67
0
0
12 Mar 2025
AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial Attacks
Jin Li
Ziqiang He
Anwei Luo
Jian-Fang Hu
Zhong Wang
Xiangui Kang
DiffM
69
0
0
12 Mar 2025
Revisiting Backdoor Attacks on Time Series Classification in the Frequency Domain
Yuanmin Huang
Mi Zhang
Zhaoxiang Wang
Wenxuan Li
Min Yang
AAML
AI4TS
59
0
0
12 Mar 2025
All Your Knowledge Belongs to Us: Stealing Knowledge Graphs via Reasoning APIs
Zhaohan Xi
65
0
0
12 Mar 2025
Tangentially Aligned Integrated Gradients for User-Friendly Explanations
Lachlan Simpson
Federico Costanza
Kyle Millar
A. Cheng
Cheng-Chew Lim
Hong-Gunn Chew
FAtt
86
2
0
11 Mar 2025
FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods
Seyyed Mohammad Sadegh Moosavi Khorzooghi
Poojitha Thota
Mohit Singhal
Abolfazl Asudeh
Gautam Das
Shirin Nilizadeh
AAML
50
0
0
11 Mar 2025
Efficient and Accurate Estimation of Lipschitz Constants for Hybrid Quantum-Classical Decision Models
Sajjad Hashemian
Mohammad Saeed Arvenaghi
53
0
0
11 Mar 2025
Seal Your Backdoor with Variational Defense
Ivan Sabolić
Matej Grcić
Sinisa Segvic
AAML
192
0
0
11 Mar 2025
Adv-CPG: A Customized Portrait Generation Framework with Facial Adversarial Attacks
Junying Wang
Hongyuan Zhang
Yuan Yuan
AAML
PICV
80
0
0
11 Mar 2025
Utilizing Jailbreak Probability to Attack and Safeguard Multimodal LLMs
Wenzhuo Xu
Zhipeng Wei
Xiongtao Sun
Deyue Zhang
Dongdong Yang
Quanchen Zou
Xinming Zhang
AAML
52
0
0
10 Mar 2025
Runtime Detection of Adversarial Attacks in AI Accelerators Using Performance Counters
Habibur Rahaman
Atri Chatterjee
Swarup Bhunia
59
0
0
10 Mar 2025
Trustworthy Machine Learning via Memorization and the Granular Long-Tail: A Survey on Interactions, Tradeoffs, and Beyond
Qiongxiu Li
Xiaoyu Luo
Yiyi Chen
Johannes Bjerva
48
0
0
10 Mar 2025
Non-vacuous Generalization Bounds for Deep Neural Networks without any modification to the trained models
Khoat Than
Dat Phan
BDL
AAML
VLM
60
0
0
10 Mar 2025
Strengthening the Internal Adversarial Robustness in Lifted Neural Networks
Christopher Zach
AAML
51
0
0
10 Mar 2025
Breaking the Limits of Quantization-Aware Defenses: QADT-R for Robustness Against Patch-Based Adversarial Attacks in QNNs
Amira Guesmi
B. Ouni
Muhammad Shafique
MQ
AAML
36
0
0
10 Mar 2025
Life-Cycle Routing Vulnerabilities of LLM Router
Qiqi Lin
Xiaoyang Ji
Shengfang Zhai
Qingni Shen
Zhi-Li Zhang
Yuejian Fang
Yansong Gao
AAML
62
1
0
09 Mar 2025
MMARD: Improving the Min-Max Optimization Process in Adversarial Robustness Distillation
Yuzheng Wang
Zhaoyu Chen
Dingkang Yang
Yuanhang Wang
Lizhe Qi
AAML
69
0
0
09 Mar 2025
Long-tailed Adversarial Training with Self-Distillation
Seungju Cho
Hongsin Lee
Changick Kim
AAML
TTA
218
0
0
09 Mar 2025
Boosting the Local Invariance for Better Adversarial Transferability
Bohan Liu
Xiaosen Wang
AAML
65
0
0
08 Mar 2025
Exploring Adversarial Transferability between Kolmogorov-Arnold Networks
Songping Wang
Xinquan Yue
Yueming Lyu
Caifeng Shan
AAML
74
1
0
08 Mar 2025
Poisoned-MRAG: Knowledge Poisoning Attacks to Multimodal Retrieval Augmented Generation
Yinuo Liu
Zenghui Yuan
Guiyao Tie
Jiawen Shi
Lichao Sun
Neil Zhenqiang Gong
48
1
0
08 Mar 2025
Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models
Thomas Winninger
Boussad Addad
Katarzyna Kapusta
AAML
68
0
0
08 Mar 2025
Robust Intrusion Detection System with Explainable Artificial Intelligence
Betül Güvenç Paltun
Ramin Fuladi
Rim El Malki
AAML
56
0
0
07 Mar 2025
Anti-Diffusion: Preventing Abuse of Modifications of Diffusion-Based Models
Zheng Li
Liangbin Xie
Jiantao Zhou
Xintao Wang
Haiwei Wu
Jinyu Tian
39
0
0
07 Mar 2025
Generalizable Image Repair for Robust Visual Autonomous Racing
Carson Sobolewski
Zhenjiang Mao
Kshitij Vejre
Ivan Ruchkin
52
0
0
07 Mar 2025
Scale-Invariant Adversarial Attack against Arbitrary-scale Super-resolution
Yihao Huang
Xin Luo
Felix Juefei-Xu
Xiaojun Jia
Weikai Miao
G. Pu
Yang Liu
55
1
0
06 Mar 2025
Provable Robust Overfitting Mitigation in Wasserstein Distributionally Robust Optimization
Shuang Liu
Yihan Wang
Yifan Zhu
Yibo Miao
Xiao-Shan Gao
66
0
0
06 Mar 2025
Energy-Latency Attacks: A New Adversarial Threat to Deep Learning
H. B. Meftah
W. Hamidouche
Sid Ahmed Fezza
Olivier Déforges
AAML
48
0
0
06 Mar 2025