Explaining and Harnessing Adversarial Examples (arXiv:1412.6572)
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN · 20 December 2014
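
This is the paper that introduced the fast gradient sign method (FGSM), which several of the citing works listed below build on directly (for example, the FGSM vs. Carlini-Wagner comparison and the FGSM-based test-time purification paper). For quick reference, a minimal FGSM sketch in PyTorch follows; the model, loss_fn, and epsilon names and the [0, 1] clamp are illustrative assumptions, not code from the paper or this page.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    """Minimal one-step FGSM sketch: move each input component by epsilon in
    the direction of the sign of the loss gradient with respect to the input.
    Names (model, loss_fn, epsilon) and the [0, 1] clamp are assumptions."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)   # e.g. torch.nn.functional.cross_entropy
    loss.backward()                   # populates x_adv.grad
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid image range
```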

Papers citing "Explaining and Harnessing Adversarial Examples"

50 / 3,744 papers shown

Struggle with Adversarial Defense? Try Diffusion
Yujie Li, Yanbin Wang, Haitao Xu, Bin Liu, Jianguo Sun, Zhenhao Guo, Wenrui Ma
DiffM · 12 Apr 2024

Adversarial purification for no-reference image-quality metrics: applicability study and new methods
Aleksandr Gushchin, Anna Chistyakova, Vladislav Minashkin, Anastasia Antsiferova, D. Vatolin
10 Apr 2024

BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack
Viet Vo, Ehsan Abbasnejad, Damith C. Ranasinghe
AAML · 08 Apr 2024

CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems
Francesco Marchiori, Mauro Conti
AAML · 06 Apr 2024

Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training
Shizhan Gong, Qi Dou, Farzan Farnia
FAtt · 06 Apr 2024

Goal-guided Generative Prompt Injection Attack on Large Language Models
Chong Zhang, Mingyu Jin, Qinkai Yu, Chengzhi Liu, Haochen Xue, Xiaobo Jin
AAML, SILM · 06 Apr 2024

Evaluating Adversarial Robustness: A Comparison Of FGSM, Carlini-Wagner Attacks, And The Role of Distillation as Defense Mechanism
Trilokesh Ranjan Sarkar, Nilanjan Das, Pralay Sankar Maitra, Bijoy Some, Ritwik Saha, Orijita Adhikary, Bishal Bose, Jaydip Sen
AAML · 05 Apr 2024

FACTUAL: A Novel Framework for Contrastive Learning Based Robust SAR Image Classification
Xu Wang, Tian Ye, Rajgopal Kannan, Viktor Prasanna
AAML · 04 Apr 2024

One Noise to Rule Them All: Multi-View Adversarial Attacks with Universal Perturbation
Mehmet Ergezer, Phat Duong, Christian Green, Tommy Nguyen, Abdurrahman Zeybey
AAML · 02 Apr 2024

Evaluating Large Language Models Using Contrast Sets: An Experimental Approach
Manish Sanwal
02 Apr 2024

Benchmarking the Robustness of Temporal Action Detection Models Against Temporal Corruptions
Runhao Zeng, Xiaoyong Chen, Jiaming Liang, Huisi Wu, Guangzhong Cao, Yong Guo
AAML · 29 Mar 2024

Genos: General In-Network Unsupervised Intrusion Detection by Rule Extraction
Ruoyu Li, Qing Li, Yu Zhang, Dan Zhao, Xi Xiao, Yong-jia Jiang
28 Mar 2024

Bidirectional Consistency Models
Liangchen Li, Jiajun He
DiffM · 26 Mar 2024

Convection-Diffusion Equation: A Theoretically Certified Framework for Neural Networks
Tangjun Wang, Chenglong Bao, Zuoqiang Shi
DiffM · 23 Mar 2024

DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation
Yifan Wu, Jiawei Du, Ping Liu, Yuewei Lin, Wenqing Cheng, Wei Xu
DD, AAML · 20 Mar 2024

Understanding and Improving Training-free Loss-based Diffusion Guidance
Yifei Shen, Xinyang Jiang, Yezhen Wang, Yifan Yang, Dongqi Han, Dongsheng Li
FaML · 19 Mar 2024

Sim2Real in Reconstructive Spectroscopy: Deep Learning with Augmented Device-Informed Data Simulation
Jiyi Chen, Pengyu Li, Yutong Wang, Pei-Cheng Ku, Qing Qu
19 Mar 2024

ADAPT to Robustify Prompt Tuning Vision Transformers
Masih Eskandar, Tooba Imtiaz, Zifeng Wang, Jennifer Dy
VPVLM, VLM, AAML · 19 Mar 2024

Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM
Linyu Tang, Lei Zhang
AAML · 18 Mar 2024

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples
Ziqi Zhou, Minghui Li, Wei Liu, Shengshan Hu, Yechao Zhang, Wei Wan, Lulu Xue, Leo Yu Zhang, Dezhong Yao, Hai Jin
SILM, AAML · 16 Mar 2024

Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study
Chenguang Wang, Ruoxi Jia, Xin Liu, Dawn Song
VLM · 15 Mar 2024

Towards Adversarially Robust Dataset Distillation by Curvature Regularization
Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang
DD · 15 Mar 2024

Approximate Nullspace Augmented Finetuning for Robust Vision Transformers
Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang
AAML, ViT · 15 Mar 2024

Counter-Samples: A Stateless Strategy to Neutralize Black Box Adversarial Attacks
Roey Bokobza, Yisroel Mirsky
AAML · 14 Mar 2024

Specification Overfitting in Artificial Intelligence
Benjamin Roth, Pedro Henrique Luz de Araujo, Yuxi Xia, Saskia Kaltenbrunner, Christoph Korab
13 Mar 2024

Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
F. Mumuni, A. Mumuni
AAML · 11 Mar 2024

Dynamic Perturbation-Adaptive Adversarial Training on Medical Image Classification
Shuai Li, Xiaoguang Ma, Shancheng Jiang, Lu Meng
AAML, OOD · 11 Mar 2024

epsilon-Mesh Attack: A Surface-based Adversarial Point Cloud Attack for Facial Expression Recognition
Batuhan Cengiz, Mert Gulsen, Y. Sahin, Gözde B. Ünal
3DPC, AAML · 11 Mar 2024

Attacking Transformers with Feature Diversity Adversarial Perturbation
Chenxing Gao, Hang Zhou, Junqing Yu, Yuteng Ye, Jiale Cai, Junle Wang, Wei Yang
AAML · 10 Mar 2024

Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape
Tiejin Chen, Wenwang Huang, Linsey Pang, Dongsheng Luo, Hua Wei
OOD · 09 Mar 2024

Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples
Eda Yilmaz, H. Keles
AAML · 08 Mar 2024

Fooling Neural Networks for Motion Forecasting via Adversarial Attacks
Edgar Medina, Leyong Loh
AAML · 07 Mar 2024

A Survey on Human-AI Teaming with Large Pre-Trained Models
Vanshika Vats, Marzia Binta Nizam, Minghao Liu, Ziyuan Wang, Richard Ho, ..., Celeste Shen, Rachel Shen, Nafisa Hussain, Kesav Ravichandran, James Davis
LM&MA · 07 Mar 2024

Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications
Stav Cohen, Ron Bitton, Ben Nassi
05 Mar 2024

Adversarial Testing for Visual Grounding via Image-Aware Property Reduction
Zhiyuan Chang, Mingyang Li, Junjie Wang, Cheng Li, Boyu Wu, Fanjiang Xu, Qing Wang
AAML · 02 Mar 2024

Training-set-free two-stage deep learning for spectroscopic data de-noising
Dongchen Huang, Junde Liu, Tian Qian, Hongming Weng
29 Feb 2024

Catastrophic Overfitting: A Potential Blessing in Disguise
Mengnan Zhao, Lihe Zhang, Yuqiu Kong, Baocai Yin
AAML · 28 Feb 2024

Understanding the Role of Pathways in a Deep Neural Network
Lei Lyu, Chen Pang, Jihua Wang
28 Feb 2024

Adversarial Example Soups: Improving Transferability and Stealthiness for Free
Bo Yang, Hengwei Zhang, Jin-dong Wang, Yulong Yang, Chenhao Lin, Chao Shen, Zhengyu Zhao
SILM, AAML · 27 Feb 2024

Fast Adversarial Attacks on Language Models In One GPU Minute
Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan, Priyatham Kattakinda, Atoosa Malemir Chegini, S. Feizi
MIALM · 23 Feb 2024

Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off
Futa Waseda, Ching-Chun Chang, Isao Echizen
AAML · 22 Feb 2024

Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion
Yichi Zhang, Zhuo Chen, Lei Liang, Hua-zeng Chen, Wen Zhang
22 Feb 2024

AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning
Vasudev Gohil, Satwik Patnaik, D. Kalathil, Jeyavijayan Rajendran
AAML · 21 Feb 2024

Robustness of Deep Neural Networks for Micro-Doppler Radar Classification
Mikolaj Czerkawski, C. Clemente, C. Michie, Christos Tachtatzis
OOD, AAML · 21 Feb 2024

QuanTest: Entanglement-Guided Testing of Quantum Neural Network Systems
Jinjing Shi, Zimeng Xiao, Heyuan Shi, Yu Jiang, Xuelong Li
AAML · 20 Feb 2024

Defending Jailbreak Prompts via In-Context Adversarial Game
Yujun Zhou, Yufei Han, Haomin Zhuang, Kehan Guo, Zhenwen Liang, Hongyan Bao, Xiangliang Zhang
LLMAG, AAML · 20 Feb 2024

Manipulating hidden-Markov-model inferences by corrupting batch data
William N. Caballero, Jose Manuel Camacho, Tahir Ekin, Roi Naveiro
AAML · 19 Feb 2024

Theoretical Understanding of Learning from Adversarial Perturbations
Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki
AAML · 16 Feb 2024

Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space
Leo Schwinn, David Dobre, Sophie Xhonneux, Gauthier Gidel, Stephan Gunnemann
AAML · 14 Feb 2024

Tighter Bounds on the Information Bottleneck with Application to Deep Learning
Nir Weingarten, Z. Yakhini, Moshe Butman, Ran Gilad-Bachrach
AAML · 12 Feb 2024