Towards Evaluating the Robustness of Neural Networks (arXiv:1608.04644)
16 August 2016
Nicholas Carlini, D. Wagner
OOD, AAML

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 / 1,465 papers shown

Robust Localization of Key Fob Using Channel Impulse Response of Ultra Wide Band Sensors for Keyless Entry Systems
A. Kolli, Filippo Casamassima, Horst Possegger, Horst Bischof
AAML · 16 Jan 2024

Machine unlearning through fine-grained model parameters perturbation
Zhiwei Zuo, Zhuo Tang, KenLi Li, Anwitaman Datta
AAML, MU · 09 Jan 2024

Deep Learning for Code Intelligence: Survey, Benchmark and Toolkit
Yao Wan, Yang He, Zhangqian Bi, Jianguo Zhang, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin, Philip S. Yu
30 Dec 2023

DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation
Wenyi Tan, Yang Li, Chenxing Zhao, Zhunga Liu, Quanbiao Pan
AAML · 28 Dec 2023

Robustness Verification for Knowledge-Based Logic of Risky Driving Scenes
Xia Wang, Anda Liang, Jonathan Sprinkle, Taylor T. Johnson
27 Dec 2023

How Smooth Is Attention?
Valérie Castin, Pierre Ablin, Gabriel Peyré
AAML · 22 Dec 2023

ARBiBench: Benchmarking Adversarial Robustness of Binarized Neural Networks
Peng Zhao, Jiehua Zhang, Bowen Peng, Longguang Wang, Yingmei Wei, Yu Liu, Li Liu
AAML · 21 Dec 2023

PGN: A perturbation generation network against deep reinforcement learning
Xiangjuan Li, Feifan Li, Yang Li, Quanbiao Pan
AAML · 20 Dec 2023

The Adaptive Arms Race: Redefining Robustness in AI Security
Ilias Tsingenopoulos, Vera Rimmer, Davy Preuveneers, Fabio Pierazzi, Lorenzo Cavallaro, Wouter Joosen
AAML · 20 Dec 2023

Continual Adversarial Defense
Qian Wang, Yaoyao Liu, Hefei Ling, Yingwei Li, Qihao Liu, Ping Li
AAML · 15 Dec 2023

DTA: Distribution Transform-based Attack for Query-Limited Scenario
Renyang Liu, Wei Zhou, Xin Jin, Song Gao, Yuanyu Wang, Ruxin Wang
12 Dec 2023

MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness
Xiaoyun Xu, Shujian Yu, Jingzheng Wu, S. Picek
AAML · 08 Dec 2023

Defense Against Adversarial Attacks using Convolutional Auto-Encoders
Shreyasi Mandal
AAML · 06 Dec 2023

Indirect Gradient Matching for Adversarial Robust Distillation
Hongsin Lee, Seungju Cho, Changick Kim
AAML, FedML · 06 Dec 2023

InstructTA: Instruction-Tuned Targeted Attack for Large Vision-Language Models
Xunguang Wang, Zhenlan Ji, Pingchuan Ma, Zongjie Li, Shuai Wang
MLLM · 04 Dec 2023

Improving Feature Stability during Upsampling -- Spectral Artifacts and the Importance of Spatial Context
Shashank Agnihotri, Julia Grabinski, M. Keuper
29 Nov 2023

Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks
Lucas Beerens, D. Higham
28 Nov 2023

CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts
Yichao Cai, Yuhang Liu, Zhen Zhang, Javen Qinfeng Shi
CLIP, VLM · 28 Nov 2023

Mixing Classifiers to Alleviate the Accuracy-Robustness Trade-Off
Yatong Bai, Brendon G. Anderson, Somayeh Sojoudi
AAML · 26 Nov 2023

When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence
Benoît Coqueret, Mathieu Carbone, Olivier Sentieys, Gabriel Zaid
23 Nov 2023

A Survey of Adversarial CAPTCHAs on its History, Classification and Generation
Zisheng Xu, Qiao Yan, Fei Yu, Victor C.M. Leung
AAML · 22 Nov 2023

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan
AAML, SILM · 20 Nov 2023

PACOL: Poisoning Attacks Against Continual Learners
Huayu Li, G. Ditzler
AAML · 18 Nov 2023

Interpretable Reinforcement Learning for Robotics and Continuous Control
Rohan R. Paleja, Letian Chen, Yaru Niu, Andrew Silva, Zhaoxin Li, ..., K. Chang, H. E. Tseng, Yan Wang, S. Nageshrao, Matthew C. Gombolay
16 Nov 2023

Fast Certification of Vision-Language Models Using Incremental Randomized Smoothing
Ashutosh Nirala, Ameya Joshi, Chinmay Hegde, S Sarkar
VLM · 15 Nov 2023

On The Relationship Between Universal Adversarial Attacks And Sparse Representations
Dana Weitzner, Raja Giryes
AAML · 14 Nov 2023

Intriguing Properties of Data Attribution on Diffusion Models
Xiaosen Zheng, Tianyu Pang, Chao Du, Jing Jiang, Min-Bin Lin
TDI · 01 Nov 2023

LipSim: A Provably Robust Perceptual Similarity Metric
Sara Ghazanfari, Alexandre Araujo, Prashanth Krishnamurthy, Farshad Khorrami, Siddharth Garg
27 Oct 2023

Theoretically Grounded Loss Functions and Algorithms for Score-Based Multi-Class Abstention
Anqi Mao, M. Mohri, Yutao Zhong
23 Oct 2023

Adversarial Image Generation by Spatial Transformation in Perceptual Colorspaces
A. Aydin, A. Temizel
21 Oct 2023

PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses
Chong Xiang, Tong Wu, Sihui Dai, Jonathan Petit, Suman Jana, Prateek Mittal
19 Oct 2023

Adversarial Training for Physics-Informed Neural Networks
Yao Li, Shengzhu Shi, Zhichang Guo, Boying Wu
AAML, PINN · 18 Oct 2023

Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
S. M. Fazle, J. Mondal, Meem Arafat Manab, Xi Xiao, Sarfaraz Newaz
AAML · 18 Oct 2023

Evading Detection Actively: Toward Anti-Forensics against Forgery Localization
Long Zhuo, Shenghai Luo, Shunquan Tan, Han Chen, Bin Li, Jiwu Huang
AAML · 16 Oct 2023

A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
Yang Wang, B. Dong, Ke Xu, Haiyin Piao, Yufei Ding, Baocai Yin, Xin Yang
AAML · 10 Oct 2023

PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization
Jiancong Xiao, Ruoyu Sun, Zhimin Luo
AAML · 09 Oct 2023

Tight Certified Robustness via Min-Max Representations of ReLU Neural Networks
Brendon G. Anderson, Samuel Pfrommer, Somayeh Sojoudi
OOD · 07 Oct 2023

Generating Less Certain Adversarial Examples Improves Robust Generalization
Minxing Zhang, Michael Backes, Xiao Zhang
AAML · 06 Oct 2023

A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks
Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao
AAML · 01 Oct 2023

Improving Machine Learning Robustness via Adversarial Training
Long Dang, T. Hapuarachchi, Kaiqi Xiong, Jing Lin
OOD, AAML · 22 Sep 2023

Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation
K. K. Nakka, Mathieu Salzmann
DRL, CoGe · 20 Sep 2023

Adversarial Attacks Against Uncertainty Quantification
Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli
AAML · 19 Sep 2023

A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services
Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue
MU · 15 Sep 2023

Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection
Weijie Wang, Zhengyu Zhao, N. Sebe, Bruno Lepri
AAML · 03 Sep 2023

Input margins can predict generalization too
Coenraad Mouton, Marthinus W. Theunissen, Marelie Hattingh Davel
AAML, UQCV, AI4CE · 29 Aug 2023

RecRec: Algorithmic Recourse for Recommender Systems
Sahil Verma, Ashudeep Singh, Varich Boonsanong, John P. Dickerson, Chirag Shah
28 Aug 2023

Enhancing Adversarial Attacks: The Similar Target Method
Shuo Zhang, Ziruo Wang, Zikai Zhou, Huanran Chen
AAML · 21 Aug 2023

Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models
Preben Ness, D. Marijan, Sunanda Bose
CML · 21 Aug 2023

Boosting Adversarial Transferability by Block Shuffle and Rotation
Kunyu Wang, Xu He, Wenxuan Wang, Xiaosen Wang
AAML · 20 Aug 2023

Robust Mixture-of-Expert Training for Convolutional Neural Networks
Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu
MoE, AAML, OOD · 19 Aug 2023