ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 1412.6572
Explaining and Harnessing Adversarial Examples
20 December 2014 · Ian Goodfellow, Jonathon Shlens, Christian Szegedy
Tags: AAML, GAN
ArXiv (abs) · PDF · HTML

Papers citing "Explaining and Harnessing Adversarial Examples"

50 / 8,334 papers shown
• Defense Against Model Stealing Based on Account-Aware Distribution Discrepancy
  Jian-Ping Mei, Weibin Zhang, Jie Chen, Xinyu Zhang, Tiantian Zhu · AAML · 16 Mar 2025
• Robust Dataset Distillation by Matching Adversarial Trajectories
  Wei Lai, Tianyu Ding, Dongdong Ren, Lei Wang, Jing Huo, Yang Gao, Wenbin Li · AAML, DD · 15 Mar 2025
• Stabilizing Quantization-Aware Training by Implicit-Regularization on Hessian Matrix
  Junbiao Pang, Tianyang Cai · 14 Mar 2025
• Are Deep Speech Denoising Models Robust to Adversarial Noise?
  Will Schwarzer, Philip S. Thomas, Andrea Fanelli, Xiaoyu Liu · 14 Mar 2025
• Weakly Supervised Contrastive Adversarial Training for Learning Robust Features from Semi-supervised Data
  Lilin Zhang, Chengpei Wu, Ning Yang · 14 Mar 2025
• Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification
  Yingjie Zhang, Tong Liu, Zhe Zhao, Guozhu Meng, Kai Chen · AAML · 14 Mar 2025
• reWordBench: Benchmarking and Improving the Robustness of Reward Models with Transformed Inputs
  Zhaofeng Wu, Michihiro Yasunaga, Andrew Cohen, Yoon Kim, Asli Celikyilmaz, Marjan Ghazvininejad · 14 Mar 2025
• Do computer vision foundation models learn the low-level characteristics of the human visual system?
  Yancheng Cai, Fei Yin, Dounia Hammou, Rafal Mantiuk · VLM · 13 Mar 2025
  Presented at ResearchTrend Connect | VLM on 14 Mar 2025
• Robustness Tokens: Towards Adversarial Robustness of Transformers
  Brian Pulfer, Yury Belousov, S. Voloshynovskiy · AAML · 13 Mar 2025
• AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption
  Joonsung Jeon, Woo Jae Kim, Suhyeon Ha, Sooel Son, Sung-eui Yoon · DiffM, AAML · 13 Mar 2025
• Efficient Reachability Analysis for Convolutional Neural Networks Using Hybrid Zonotopes
  Yuhao Zhang, Xiangru Xu · AAML · 13 Mar 2025
• Attacking Multimodal OS Agents with Malicious Image Patches
  Lukas Aichberger, Alasdair Paren, Y. Gal, Philip Torr, Adel Bibi · AAML · 13 Mar 2025
• OODD: Test-time Out-of-Distribution Detection with Dynamic Dictionary
  Yifeng Yang, Lin Zhu, Zewen Sun, Hengyu Liu, Qinying Gu, Nanyang Ye · OODD · 13 Mar 2025
• Enhancing Adversarial Example Detection Through Model Explanation
  Qian Ma, Ziping Ye · AAML · 12 Mar 2025
• AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial Attacks
  Jin Li, Ziqiang He, Anwei Luo, Jian-Fang Hu, Zhong Wang, Xiangui Kang · DiffM · 12 Mar 2025
• Revealing Unintentional Information Leakage in Low-Dimensional Facial Portrait Representations
  Kathleen Anderson, Thomas Martinetz · CVBM · 12 Mar 2025
• Revisiting Backdoor Attacks on Time Series Classification in the Frequency Domain
  Yuanmin Huang, Mi Zhang, Zhaoxiang Wang, Wenxuan Li, Min Yang · AAML, AI4TS · 12 Mar 2025
• FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods
  Seyyed Mohammad Sadegh Moosavi Khorzooghi, Poojitha Thota, Mohit Singhal, Abolfazl Asudeh, Gautam Das, Shirin Nilizadeh · AAML · 11 Mar 2025
• Adv-CPG: A Customized Portrait Generation Framework with Facial Adversarial Attacks
  Junying Wang, Hongyuan Zhang, Yuan Yuan · AAML, PICV · 11 Mar 2025
• A Grey-box Text Attack Framework using Explainable AI
  Esther Chiramal, Kelvin Soh Boon Kai · AAML, SILM · 11 Mar 2025
• Utilizing Jailbreak Probability to Attack and Safeguard Multimodal LLMs
  Wenzhuo Xu, Zhipeng Wei, Xiongtao Sun, Deyue Zhang, Dongdong Yang, Quanchen Zou, Xinming Zhang · AAML · 10 Mar 2025
• Trustworthy Machine Learning via Memorization and the Granular Long-Tail: A Survey on Interactions, Tradeoffs, and Beyond
  Qiongxiu Li, Xiaoyu Luo, Yiyi Chen, Johannes Bjerva · 10 Mar 2025
• MIGA: Mutual Information-Guided Attack on Denoising Models for Semantic Manipulation
  Guanghao Li, Mingzhi Chen, Hao Yu, Shuting Dong, Wenhao Jiang, Ming Tang, Chun Yuan · DiffM, AAML · 10 Mar 2025
• Breaking the Limits of Quantization-Aware Defenses: QADT-R for Robustness Against Patch-Based Adversarial Attacks in QNNs
  Amira Guesmi, B. Ouni, Muhammad Shafique · MQ, AAML · 10 Mar 2025
• MMARD: Improving the Min-Max Optimization Process in Adversarial Robustness Distillation
  Yuzheng Wang, Zhaoyu Chen, Dingkang Yang, Yuanhang Wang, Lizhe Qi · AAML · 09 Mar 2025
• Life-Cycle Routing Vulnerabilities of LLM Router
  Qiqi Lin, Xiaoyang Ji, Shengfang Zhai, Qingni Shen, Zhi-Li Zhang, Yuejian Fang, Yansong Gao · AAML · 09 Mar 2025
• Long-tailed Adversarial Training with Self-Distillation
  Seungju Cho, Hongsin Lee, Changick Kim · AAML, TTA · 09 Mar 2025
• Exploring Adversarial Transferability between Kolmogorov-Arnold Networks
  Songping Wang, Xinquan Yue, Yueming Lyu, Caifeng Shan · AAML · 08 Mar 2025
• Boosting the Local Invariance for Better Adversarial Transferability
  Bohan Liu, Xiaosen Wang · AAML · 08 Mar 2025
• Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models
  Thomas Winninger, Boussad Addad, Katarzyna Kapusta · AAML · 08 Mar 2025
• Robust Intrusion Detection System with Explainable Artificial Intelligence
  Betül Güvenç Paltun, Ramin Fuladi, Rim El Malki · AAML · 07 Mar 2025
• phepy: Visual Benchmarks and Improvements for Out-of-Distribution Detectors
  Juniper Tyree, Andreas Rupp, Petri S. Clusius, Michael Boy · OODD · 07 Mar 2025
• Generalizable Image Repair for Robust Visual Autonomous Racing
  Carson Sobolewski, Zhenjiang Mao, Kshitij Vejre, Ivan Ruchkin · 07 Mar 2025
• Poisoning Bayesian Inference via Data Deletion and Replication
  Matthieu Carreau, Roi Naveiro, William N. Caballero · AAML, KELM · 06 Mar 2025
• Provable Robust Overfitting Mitigation in Wasserstein Distributionally Robust Optimization
  Shuang Liu, Yihan Wang, Yifan Zhu, Yibo Miao, Xiao-Shan Gao · 06 Mar 2025
• Scale-Invariant Adversarial Attack against Arbitrary-scale Super-resolution
  Yihao Huang, Xin Luo, Felix Juefei-Xu, Xiaojun Jia, Weikai Miao, G. Pu, Yang Liu · 06 Mar 2025
• Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification
  Van Bach Nguyen, C. Seifert, Jorg Schlotterer · BDL · 06 Mar 2025
• Energy-Latency Attacks: A New Adversarial Threat to Deep Learning
  H. B. Meftah, W. Hamidouche, Sid Ahmed Fezza, Olivier Déforges · AAML · 06 Mar 2025
• Dynamic-KGQA: A Scalable Framework for Generating Adaptive Question Answering Datasets
  Preetam Prabhu Srikar Dammu, Himanshu Naidu, Chirag Shah · 06 Mar 2025
• The Challenge of Identifying the Origin of Black-Box Large Language Models
  Ziqing Yang, Yixin Wu, Yun Shen, Wei Dai, Michael Backes, Yang Zhang · AAML · 06 Mar 2025
• When Claims Evolve: Evaluating and Enhancing the Robustness of Embedding Models Against Misinformation Edits
  Jabez Magomere, Emanuele La Malfa, Manuel Tonneau, Ashkan Kazemi, Scott A. Hale · KELM · 05 Mar 2025
• Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients
  Li Lun, Kunyu Feng, Qinglong Ni, Ling Liang, Yuan Wang, Ying Li, Dunshan Yu, Xiaoxin Cui · AAML · 05 Mar 2025
• Predicting Practically? Domain Generalization for Predictive Analytics in Real-world Environments
  Hanyu Duan, Yi Yang, Ahmed Abbasi, Kar Yan Tam · OOD · 05 Mar 2025
• Adversarial Example Based Fingerprinting for Robust Copyright Protection in Split Learning
  Zhangting Lin, Mingfu Xue, Kewei Chen, Wen Liu, Xiang Gao, L. Zhang, Jian Wang, Yushu Zhang · 05 Mar 2025
• Task-Agnostic Attacks Against Vision Foundation Models
  Brian Pulfer, Yury Belousov, Vitaliy Kinakh, Teddy Furon, S. Voloshynovskiy · AAML · 05 Mar 2025
• LLM-Safety Evaluations Lack Robustness
  Tim Beyer, Sophie Xhonneux, Simon Geisler, Gauthier Gidel, Leo Schwinn, Stephan Günnemann · ALM, ELM · 04 Mar 2025
• One Stone, Two Birds: Enhancing Adversarial Defense Through the Lens of Distributional Discrepancy
  Jiacheng Zhang, Benjamin I. P. Rubinstein, Jing Zhang, Feng Liu · 04 Mar 2025
• Words or Vision: Do Vision-Language Models Have Blind Faith in Text?
  Ailin Deng, Tri Cao, Zhirui Chen, Bryan Hooi · VLM · 04 Mar 2025
• AutoAdvExBench: Benchmarking autonomous exploitation of adversarial example defenses
  Nicholas Carlini, Javier Rando, Edoardo Debenedetti, Milad Nasr, F. Tramèr · AAML, ELM · 03 Mar 2025
• Transformer Meets Twicing: Harnessing Unattended Residual Information
  Laziz U. Abdullaev, Tan M. Nguyen · 02 Mar 2025