ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Explaining and Harnessing Adversarial Examples (arXiv:1412.6572, v3 latest)

20 December 2014
Ian Goodfellow
Jonathon Shlens
Christian Szegedy
Tags: AAML, GAN
ArXiv (abs) · PDF · HTML

Papers citing "Explaining and Harnessing Adversarial Examples"

Showing 50 of 8,334 citing papers.
Fool the Stoplight: Realistic Adversarial Patch Attacks on Traffic Light Detectors
Svetlana Pavlitska
Jamie Robb
Nikolai Polley
Melih Yazgan
Johann Marius Zöllner
AAML
107
0
0
05 Jun 2025
Towards Better Generalization via Distributional Input Projection Network
Yifan Hao
Yanxin Lu
Xinwei Shen
Tong Zhang
100
0
0
05 Jun 2025
Neural Network Reprogrammability: A Unified Theme on Model Reprogramming, Prompt Tuning, and Prompt Instruction
Zesheng Ye
C. Cai
Ruijiang Dong
Jianzhong Qi
Lei Feng
Pin-Yu Chen
Feng Liu
210
0
0
05 Jun 2025
Robust Few-Shot Vision-Language Model Adaptation
Hanxin Wang
Tian Liu
Shu Kong
VLM
121
0
0
05 Jun 2025
Sylva: Tailoring Personalized Adversarial Defense in Pre-trained Models via Collaborative Fine-tuning
Tianyu Qi
Lei Xue
Yufeng Zhan
Xiaobo Ma
AAML
33
0
0
04 Jun 2025
Higher-Order Singular-Value Derivatives of Rectangular Real Matrices
Róisín Luo
James McDermott
C. O'Riordan
44
0
0
04 Jun 2025
Privacy Leaks by Adversaries: Adversarial Iterations for Membership Inference Attack
Jing Xue
Zhishen Sun
Haishan Ye
Luo Luo
Xiangyu Chang
Ivor Tsang
Guang Dai
MIACV, MIALM
64
0
0
03 Jun 2025
How stealthy is stealthy? Studying the Efficacy of Black-Box Adversarial Attacks in the Real World
Francesco Panebianco
Mario D'Onghia
Stefano Zanero
Michele Carminati
AAML
27
0
0
03 Jun 2025
Attacking Attention of Foundation Models Disrupts Downstream Tasks
Hondamunige Prasanna Silva
Federico Becattini
Lorenzo Seidenari
AAML
27
0
0
03 Jun 2025
Urban Visibility Hotspots: Quantifying Building Vertex Visibility from Connected Vehicle Trajectories using Spatial Indexing
Artur Grigorev
Adriana-Simona Mihaita
38
0
0
03 Jun 2025
Tarallo: Evading Behavioral Malware Detectors in the Problem Space
Gabriele Digregorio
Salvatore Maccarrone
Mario D'Onghia
Luigi Gallo
Michele Carminati
Mario Polino
S. Zanero
AAML
54
0
0
03 Jun 2025
MUC-G4: Minimal Unsat Core-Guided Incremental Verification for Deep Neural Network Compression
Jingyang Li
Guoqiang Li
22
0
0
03 Jun 2025
Gradient-Based Model Fingerprinting for LLM Similarity Detection and Family Classification
Zehao Wu
Yanjie Zhao
Haoyu Wang
69
0
0
02 Jun 2025
Silence is Golden: Leveraging Adversarial Examples to Nullify Audio Control in LDM-based Talking-Head Generation
Yuan Gan
Jiaxu Miao
Yunze Wang
Yi Yang
AAML, DiffM
49
0
0
02 Jun 2025
Fighting Fire with Fire (F3): A Training-free and Efficient Visual Adversarial Example Purification Method in LVLMs
Yudong Zhang
Ruobing Xie
Yiqing Huang
Jiansheng Chen
Xingwu Sun
Zhanhui Kang
Di Wang
Yu Wang
AAML
49
0
0
01 Jun 2025
CAPAA: Classifier-Agnostic Projector-Based Adversarial Attack
Zhan Li
Mingyu Zhao
Xin Dong
Haibin Ling
Bingyao Huang
AAML
48
0
0
01 Jun 2025
The Security Threat of Compressed Projectors in Large Vision-Language Models
Yudong Zhang
Ruobing Xie
Xingwu Sun
Jiansheng Chen
Zhanhui Kang
Di Wang
Yu Wang
14
0
0
31 May 2025
LoRA as a Flexible Framework for Securing Large Vision Systems
Richard E. Neddo
Sean Willis
Zander W. Blasingame
AAML
32
0
0
31 May 2025
TRAPDOC: Deceiving LLM Users by Injecting Imperceptible Phantom Tokens into Documents
Hyundong Jin
Sicheol Sung
Shinwoo Park
SeungYeop Baik
Yo-Sub Han
25
0
0
30 May 2025
Black-box Adversarial Attacks on CNN-based SLAM Algorithms
M. Gkeka
Bowen Sun
Evgenia Smirni
C. Antonopoulos
S. Lalis
Nikolaos Bellas
AAML
27
0
0
30 May 2025
A Red Teaming Roadmap Towards System-Level Safety
Zifan Wang
Christina Q. Knight
Jeremy Kritz
Willow Primack
Julian Michael
AAML
45
0
0
30 May 2025
Diffusion Guidance Is a Controllable Policy Improvement Operator
Kevin Frans
Seohong Park
Pieter Abbeel
Sergey Levine
OffRL
70
0
0
29 May 2025
Adversarial Semantic and Label Perturbation Attack for Pedestrian Attribute Recognition
Weizhe Kong
Xiao Wang
Ruichong Gao
Chenglong Li
Yu Zhang
Xing Yang
Yaowei Wang
Jin Tang
AAML
64
0
0
29 May 2025
TRAP: Targeted Redirecting of Agentic Preferences
Hangoo Kang
Jehyeok Yeon
Gagandeep Singh
AAML
72
0
0
29 May 2025
Understanding Adversarial Training with Energy-based Models
Mujtaba Hussain Mirza
Maria Rosaria Briglia
Filippo Bartolucci
Senad Beadini
G. Lisanti
I. Masi
AAML
57
0
0
28 May 2025
How Do Diffusion Models Improve Adversarial Robustness?
Liu Yuezhang
Xue-Xin Wei
296
0
0
28 May 2025
Distributionally Robust Wireless Semantic Communication with Large AI Models
Long Tan Le
Senura Hansaja Wanasekara
Zerun Niu
Yansong Shi
Nguyen Tran
...
Walid Saad
Dusit Niyato
Zhu Han
Choong Seon Hong
H. V. Poor
18
0
0
28 May 2025
Efficient Preimage Approximation for Neural Network Certification
Anton Björklund
Mykola Zaitsev
Marta Kwiatkowska
AAML
24
0
0
28 May 2025
Targeted Unlearning Using Perturbed Sign Gradient Methods With Applications On Medical Images
George R. Nahass
Zhu Wang
Homa Rashidisabet
Won Hwa Kim
Sasha Hubschman
...
Chad A. Purnell
P. Setabutr
Ann Q. Tran
Darvin Yi
Sathya Ravi
MU, OOD
50
0
0
28 May 2025
NatADiff: Adversarial Boundary Guidance for Natural Adversarial Diffusion
Max Collins
Jordan Vice
T. French
Ajmal Mian
DiffM
48
0
0
27 May 2025
One-Time Soft Alignment Enables Resilient Learning without Weight Transport
Jeonghwan Cheon
Jaehyuk Bae
Se-Bum Paik
ODL
51
1
0
27 May 2025
Breaking Dataset Boundaries: Class-Agnostic Targeted Adversarial Attacks
Taïga Gonçalves
Tomo Miyazaki
S. Omachi
OOD, AAML
79
0
0
27 May 2025
Preventing Adversarial AI Attacks Against Autonomous Situational Awareness: A Maritime Case Study
Mathew J. Walter
Aaron Barrett
Kimberly Tam
AAML
32
1
0
27 May 2025
Is Your LLM Overcharging You? Tokenization, Transparency, and Incentives
Ander Artola Velasco
Stratis Tsirtsis
Nastaran Okati
Manuel Gomez Rodriguez
63
1
0
27 May 2025
Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment
Xiaojun Jia
Sensen Gao
Simeng Qin
Tianyu Pang
C. Du
Yihao Huang
Xinfeng Li
Yiming Li
Bo Li
Yang Liu
AAML
44
0
0
27 May 2025
Breaking the Ceiling: Exploring the Potential of Jailbreak Attacks through Expanding Strategy Space
Yao Huang
Yitong Sun
Shouwei Ruan
Yichi Zhang
Yinpeng Dong
Xingxing Wei
AAML
48
0
0
27 May 2025
VideoMarkBench: Benchmarking Robustness of Video Watermarking
Zhengyuan Jiang
Moyang Guo
Kecen Li
Yuepeng Hu
Yupu Wang
Zhicong Huang
Cheng Hong
Neil Zhenqiang Gong
AAML
28
0
0
27 May 2025
A Framework for Adversarial Analysis of Decision Support Systems Prior to Deployment
Brett Bissey
Kyle Gatesman
Walker Dimon
Mohammad Alam
Luis Robaina
Joseph Weissman
AAML
43
0
0
27 May 2025
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains
Jiawen Zhang
Zhenwei Zhang
Shun Zheng
Xumeng Wen
Jia Li
Jiang Bian
AI4TS, AAML
163
0
0
26 May 2025
Model Stitching by Functional Latent Alignment
Ioannis Athanasiadis
Anmar Karmush
Michael Felsberg
54
0
0
26 May 2025
TESSER: Transfer-Enhancing Adversarial Attacks from Vision Transformers via Spectral and Semantic Regularization
Amira Guesmi
B. Ouni
Muhammad Shafique
AAML
233
0
0
26 May 2025
Deconstructing Obfuscation: A four-dimensional framework for evaluating Large Language Models assembly code deobfuscation capabilities
Anton Tkachenko
Dmitrij Suskevic
Benjamin Adolphi
60
0
0
26 May 2025
Diagnosing and Mitigating Modality Interference in Multimodal Large Language Models
Rui Cai
Bangzheng Li
Xiaofei Wen
Muhao Chen
Zhe Zhao
24
0
0
26 May 2025
Novel Loss-Enhanced Universal Adversarial Patches for Sustainable Speaker Privacy
Elvir Karimov
Alexander Varlamov
Danil Ivanov
Dmitrii Korzh
Oleg Y. Rogov
AAML
34
0
0
26 May 2025
DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation
Pingzhi Li
Zhen Tan
Huaizhi Qu
Huan Liu
Tianlong Chen
AAML
48
0
0
26 May 2025
Comparing Neural Network Encodings for Logic-based Explainability
Levi Cordeiro Carvalho
Saulo A. F. Oliveira
Thiago Alves Rocha
AAML
170
0
0
26 May 2025
MultiPhishGuard: An LLM-based Multi-Agent System for Phishing Email Detection
Yinuo Xue
Eric Spero
Yun Sing Koh
Giovanni Russello
AAML
26
1
0
26 May 2025
One Surrogate to Fool Them All: Universal, Transferable, and Targeted Adversarial Attacks with CLIP
Binyan Xu
Xilin Dai
Di Tang
Kehuan Zhang
AAML
22
0
0
26 May 2025
Attention! You Vision Language Model Could Be Maliciously Manipulated
Xiaosen Wang
Shaokang Wang
Zhijin Ge
Yuyang Luo
Shudong Zhang
AAML, VLM
37
0
0
26 May 2025
GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization
Zixuan Chen
Hao Lin
Ke Xu
Xinghao Jiang
Tanfeng Sun
47
0
0
25 May 2025