ResearchTrend.AI

Towards Evaluating the Robustness of Neural Networks
arXiv:1608.04644 · 16 August 2016
Nicholas Carlini, D. Wagner
OOD, AAML

Papers citing "Towards Evaluating the Robustness of Neural Networks"
(50 of 4,015 citing papers shown)
Adversarial Prompt Distillation for Vision-Language Models
Lin Luo, Xin Wang, Bojia Zi, Shihao Zhao, Xingjun Ma, Yu-Gang Jiang
AAML, VLM
22 Nov 2024

Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks
Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma
AAML
20 Nov 2024

CDI: Copyrighted Data Identification in Diffusion Models
Jan Dubiński, Antoni Kowalczuk, Franziska Boenisch, Adam Dziedzic
19 Nov 2024

Exploring adversarial robustness of JPEG AI: methodology, comparison and new methods
Egor Kovalev, Georgii Bychkov, Khaled Abud, A. Gushchin, Anna Chistyakova, Sergey Lavrushkin, D. Vatolin, Anastasia Antsiferova
AAML
18 Nov 2024

A Hard-Label Cryptanalytic Extraction of Non-Fully Connected Deep Neural Networks using Side-Channel Attacks
Benoît Coqueret, Mathieu Carbone, Olivier Sentieys, Gabriel Zaid
AAML, MLAU
15 Nov 2024

Transferable Adversarial Attacks against ASR
Xiaoxue Gao, Zexin Li, Yiming Chen, Cong Liu, Haoyang Li
AAML
14 Nov 2024

Computable Model-Independent Bounds for Adversarial Quantum Machine Learning
Bacui Li, T. Alpcan, Chandra Thapa, Udaya Parampalli
AAML
11 Nov 2024

Adversarial Detection with a Dynamically Stable System
Xiaowei Long, Jie Lin, Xiangyuan Yang
AAML
11 Nov 2024

A Survey of AI-Related Cyber Security Risks and Countermeasures in Mobility-as-a-Service
Kai-Fung Chu, Haiyue Yuan, Jinsheng Yuan, Weisi Guo, Nazmiye Balta-Ozkan, Shujun Li
08 Nov 2024

Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging
Rui Luo, Jie Bao, Zhixin Zhou, Chuangyin Dang
MedIm, AAML
07 Nov 2024

Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models
Saketh Bachu, Erfan Shayegani, Trishna Chakraborty, Rohit Lal, Arindam Dutta, Chengyu Song, Yue Dong, Nael B. Abu-Ghazaleh, Amit K. Roy-Chowdhury
06 Nov 2024

Enhancing Adversarial Robustness via Uncertainty-Aware Distributional Adversarial Training
Junhao Dong, Xinghua Qu, Zhiyuan Wang, Yew-Soon Ong
AAML
05 Nov 2024

Improving Domain Generalization in Self-supervised Monocular Depth Estimation via Stabilized Adversarial Training
Yuanqi Yao, Gang Wu, Kui Jiang, Siao Liu, Jian Kuai, Xianming Liu, Junjun Jiang
MDE
04 Nov 2024

User-wise Perturbations for User Identity Protection in EEG-Based BCIs
Xiaoqing Chen, Siyang Li, Yunlu Tu, Ziwei Wang, Dongrui Wu
04 Nov 2024

DiffPAD: Denoising Diffusion-based Adversarial Patch Decontamination
Jia Fu, Xiao Zhang, Sepideh Pashami, Fatemeh Rahimian, Anders Holst
DiffM, AAML
31 Oct 2024

Keep on Swimming: Real Attackers Only Need Partial Knowledge of a Multi-Model System
Julian Collado, Kevin Stangl
AAML
30 Oct 2024

Estimating Neural Network Robustness via Lipschitz Constant and Architecture Sensitivity
Abulikemu Abuduweili, Changliu Liu
30 Oct 2024

One Prompt to Verify Your Models: Black-Box Text-to-Image Models Verification via Non-Transferable Adversarial Attacks
Ji Guo, Wenbo Jiang, Rui Zhang, Guoming Lu, Hongwei Li
AAML
30 Oct 2024

CausAdv: A Causal-based Framework for Detecting Adversarial Examples
Hichem Debbi
CML, AAML
29 Oct 2024

Longitudinal Mammogram Exam-based Breast Cancer Diagnosis Models: Vulnerability to Adversarial Attacks
Zhengbo Zhou, Degan Hao, Dooman Arefan, M. Zuley, J. Sumkin, Shandong Wu
AAML
29 Oct 2024

Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models
Lu Yu, Haiyang Zhang, Changsheng Xu
AAML, VLM
29 Oct 2024

Towards Trustworthy Machine Learning in Production: An Overview of the Robustness in MLOps Approach
Firas Bayram, Bestoun S. Ahmed
OOD
28 Oct 2024

Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack
Shengjing Tian, Yinan Han, Xiantong Zhao, Bin Liu, Xiuping Liu
AAML
28 Oct 2024

Adversarial Attacks on Large Language Models Using Regularized Relaxation
Samuel Jacob Chacko, Sajib Biswas, Chashi Mahiul Islam, Fatema Tabassum Liza, Xiuwen Liu
AAML
24 Oct 2024

Test-time Adversarial Defense with Opposite Adversarial Path and High Attack Time Cost
Cheng-Han Yeh, Kuanchun Yu, Chun-Shien Lu
DiffM, AAML
22 Oct 2024

Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples
Kirill Lukyanov, Andrew Perminov, D. Turdakov, Mikhail Pautov
AAML
21 Oct 2024

S-CFE: Simple Counterfactual Explanations
Shpresim Sadiku, Moritz Wagner, Sai Ganesh Nagarajan, Sebastian Pokutta
21 Oct 2024

Dynamic Intelligence Assessment: Benchmarking LLMs on the Road to AGI with a Focus on Model Confidence
Norbert Tihanyi, Tamás Bisztray, Richard A. Dubniczky, Rebeka Tóth, B. Borsos, ..., Ryan Marinelli, Lucas C. Cordeiro, Merouane Debbah, Vasileios Mavroeidis, Audun Josang
20 Oct 2024

Faster-GCG: Efficient Discrete Optimization Jailbreak Attacks against Aligned Large Language Models
Xiao-Li Li, Zhuhong Li, Qiongxiu Li, Bingze Lee, Jinghao Cui, Xiaolin Hu
AAML
20 Oct 2024

SLIC: Secure Learned Image Codec through Compressed Domain Watermarking to Defend Image Manipulation
Chen-Hsiu Huang, Ja-Ling Wu
19 Oct 2024

Adversarial Training: A Survey
Mengnan Zhao, Lihe Zhang, Jingwen Ye, Huchuan Lu, Baocai Yin, Xinchao Wang
AAML
19 Oct 2024

Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum
Nashrah Haque, Xiang Li, Zhehui Chen, Yanzhao Wu, Lei Yu, Arun Iyengar, Wenqi Wei
DiffM, AAML
17 Oct 2024

Low-Rank Adversarial PGD Attack
Dayana Savostianova, Emanuele Zangrando, Francesco Tudisco
AAML
16 Oct 2024

Efficient Optimization Algorithms for Linear Adversarial Training
Antônio H. Ribeiro, Thomas B. Schon, Dave Zahariah, Francis Bach
AAML
16 Oct 2024

Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks
Kevin Eykholt, Farhan Ahmed, Pratik Vaishnavi, Amir Rahmati
AAML
15 Oct 2024

Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models
Zhongye Liu, Hongbin Liu, Yuepeng Hu, Zedian Shao, Neil Zhenqiang Gong
VLM, MLLM
15 Oct 2024

Out-of-Bounding-Box Triggers: A Stealthy Approach to Cheat Object Detectors
Tao Lin, Lijia Yu, Gaojie Jin, Renjue Li, Peng Wu, Lijun Zhang
AAML
14 Oct 2024

Embedding Self-Correction as an Inherent Ability in Large Language Models for Enhanced Mathematical Reasoning
Kuofeng Gao, Huanqia Cai, Qingyao Shuai, Dihong Gong, Zhifeng Li
LRM, ReLM
14 Oct 2024

On the Adversarial Transferability of Generalized "Skip Connections"
Yisen Wang, Yichuan Mo, Dongxian Wu, Mingjie Li, Xingjun Ma, Zhouchen Lin
AAML
11 Oct 2024

Natural Language Induced Adversarial Images
Xiaopei Zhu, Peiyang Xu, Guanning Zeng, Yingpeng Dong, Xiaolin Hu
AAML
11 Oct 2024

Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data
Binghui Li, Yuanzhi Li
OOD
11 Oct 2024

Adversarial Robustness Overestimation and Instability in TRADES
Jonathan Weiping Li, Ren-Wei Liang, Cheng-Han Yeh, Cheng-Chang Tsai, Kuanchun Yu, Chun-Shien Lu, Shang-Tse Chen
AAML
10 Oct 2024

Adversarial Vulnerability as a Consequence of On-Manifold Inseparability
Rajdeep Haldar, Yue Xing, Qifan Song, Guang Lin
09 Oct 2024

Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models
Yubo Wang, Chaohu Liu, Yanqiu Qu, Haoyu Cao, Deqiang Jiang, Linli Xu
MLLM, AAML
09 Oct 2024

Robustness Reprogramming for Representation Learning
Zhichao Hou, MohamadAli Torkamani, Hamid Krim, Xiaorui Liu
AAML, OOD
06 Oct 2024

Impact of Regularization on Calibration and Robustness: from the Representation Space Perspective
Jonghyun Park, Juyeop Kim, Jong-Seok Lee
05 Oct 2024

Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks
Zi Wang, Divyam Anshumaan, Ashish Hooda, Yudong Chen, Somesh Jha
AAML
05 Oct 2024

RAFT: Realistic Attacks to Fool Text Detectors
James Wang, Ran Li, Junfeng Yang, Chengzhi Mao
AAML, DeLMO
04 Oct 2024

Unveiling AI's Blind Spots: An Oracle for In-Domain, Out-of-Domain, and Adversarial Errors
Shuangpeng Han, Mengmi Zhang
03 Oct 2024

Impact of White-Box Adversarial Attacks on Convolutional Neural Networks
Rakesh Podder, Sudipto Ghosh
AAML
02 Oct 2024