A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples
arXiv:1608.07690 · 27 August 2016
T. Tanay
Lewis D. Griffin
AAML

Papers citing "A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples"

Showing 50 of 60 citing papers.
Towards Label-Only Membership Inference Attack against Pre-trained Large Language Models
Yu He
Boheng Li
Lu Liu
Zhongjie Ba
Wei Dong
Yiming Li
Zengchang Qin
Kui Ren
Chong Chen
MIALM
79
0
0
26 Feb 2025
A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs
Kasimir Tanner
Matteo Vilucchio
Bruno Loureiro
Florent Krzakala
AAML
68
0
0
31 Dec 2024
Exploiting the Layered Intrinsic Dimensionality of Deep Models for Practical Adversarial Training
Enes Altinisik
Safa Messaoud
Husrev Taha Sencar
Hassan Sajjad
Sanjay Chawla
AAML
53
0
0
27 May 2024
Training Image Derivatives: Increased Accuracy and Universal Robustness
V. Avrutskiy
51
0
0
21 Oct 2023
Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks
Ruixiang Tang
Gord Lueck
Rodolfo Quispe
Huseyin A. Inan
Janardhan Kulkarni
Xia Hu
34
6
0
20 Oct 2023
Improve Video Representation with Temporal Adversarial Augmentation
Jinhao Duan
Quanfu Fan
Hao-Ran Cheng
Xiaoshuang Shi
Kaidi Xu
AAML
AI4TS
ViT
33
2
0
28 Apr 2023
It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness
Peiyu Xiong
Michael W. Tegegn
Jaskeerat Singh Sarin
Shubhraneel Pal
Julia Rubin
SILM
AAML
37
8
0
17 Mar 2023
Linking convolutional kernel size to generalization bias in face analysis CNNs
Hao Liang
J. O. Caro
Vikram Maheshri
Ankit B. Patel
Guha Balakrishnan
CVBM
CML
23
0
0
07 Feb 2023
Identifying Adversarially Attackable and Robust Samples
Vyas Raina
Mark Gales
AAML
38
3
0
30 Jan 2023
REAP: A Large-Scale Realistic Adversarial Patch Benchmark
Nabeel Hingun
Chawin Sitawarin
Jerry Li
David Wagner
AAML
31
14
0
12 Dec 2022
Textual Manifold-based Defense Against Natural Language Adversarial Examples
D. M. Nguyen
Anh Tuan Luu
AAML
29
17
0
05 Nov 2022
Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense
Bo Peng
Bo Peng
Jie Zhou
Jianyue Xie
Li Liu
AAML
50
43
0
11 Sep 2022
An Analytic Framework for Robust Training of Artificial Neural Networks
R. Barati
Reza Safabakhsh
Mohammad Rahmati
AAML
19
0
0
26 May 2022
A Manifold View of Adversarial Risk
Wen-jun Zhang
Yikai Zhang
Xiaoling Hu
Mayank Goswami
Chao Chen
Dimitris N. Metaxas
AAML
19
6
0
24 Mar 2022
Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data
Yaoqing Yang
Ryan Theisen
Liam Hodgkinson
Joseph E. Gonzalez
Kannan Ramchandran
Charles H. Martin
Michael W. Mahoney
97
17
0
06 Feb 2022
Image classifiers can not be made robust to small perturbations
Zheng Dai
David K Gifford
VLM
AAML
36
1
0
07 Dec 2021
Sparse Adversarial Video Attacks with Spatial Transformations
Ronghui Mu
Wenjie Ruan
Leandro Soriano Marcolino
Q. Ni
AAML
35
18
0
10 Nov 2021
A novel network training approach for open set image recognition
Md Tahmid Hossain
S. Teng
Guojun Lu
Ferdous Sohel
26
0
0
27 Sep 2021
Advances in adversarial attacks and defenses in computer vision: A survey
Naveed Akhtar
Ajmal Mian
Navid Kardan
M. Shah
AAML
41
236
0
01 Aug 2021
Scale-invariant scale-channel networks: Deep networks that generalise to previously unseen scales
Ylva Jansson
T. Lindeberg
13
23
0
11 Jun 2021
Do Input Gradients Highlight Discriminative Features?
Harshay Shah
Prateek Jain
Praneeth Netrapalli
AAML
FAtt
31
57
0
25 Feb 2021
ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries
Jinyin Chen
Zhen Wang
Haibin Zheng
Jun Xiao
Zhaoyan Ming
AAML
27
5
0
18 Dec 2020
Unsupervised Adversarially-Robust Representation Learning on Graphs
Jiarong Xu
Yang Yang
Junru Chen
Chunping Wang
Xin Jiang
Jiangang Lu
Yizhou Sun
SSL
AAML
OOD
38
36
0
04 Dec 2020
Visually Imperceptible Adversarial Patch Attacks on Digital Images
Yaguan Qian
Jiamin Wang
Bin Wang
Xiang Ling
Zhaoquan Gu
Chunming Wu
Wassim Swaileh
AAML
44
2
0
02 Dec 2020
Characterizing and Taming Model Instability Across Edge Devices
Eyal Cidon
Evgenya Pergament
Zain Asgar
Asaf Cidon
Sachin Katti
14
7
0
18 Oct 2020
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning
Hongjun Wang
Guanbin Li
Xiaobai Liu
Liang Lin
GAN
AAML
21
22
0
15 Oct 2020
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
Timo Freiesleben
GAN
46
62
0
11 Sep 2020
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent
Ricardo Bigolin Lanfredi
Joyce D. Schroeder
Tolga Tasdizen
27
11
0
10 Sep 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
G. R. Machado
Eugênio Silva
R. Goldschmidt
AAML
33
157
0
08 Sep 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban
E. Poll
Joost Visser
AAML
32
73
0
07 Aug 2020
Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations
Chaoning Zhang
Philipp Benz
Tooba Imtiaz
In-So Kweon
SSL
AAML
22
118
0
13 Jul 2020
Boundary thickness and robustness in learning models
Yaoqing Yang
Rekha Khanna
Yaodong Yu
A. Gholami
Kurt Keutzer
Joseph E. Gonzalez
Kannan Ramchandran
Michael W. Mahoney
OOD
18
37
0
09 Jul 2020
Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu
Yuanzhi Li
MLT
AAML
39
148
0
20 May 2020
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi
M. Shafiee
Michelle Karg
C. Scharfenberger
A. Wong
OOD
AAML
72
63
0
02 Mar 2020
Softmax-based Classification is k-means Clustering: Formal Proof, Consequences for Adversarial Attacks, and Improvement through Centroid Based Tailoring
Sibylle Hess
W. Duivesteijn
Decebal Constantin Mocanu
25
12
0
07 Jan 2020
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard
Pierre-Alain Moëllic
J. Dutertre
AAML
MQ
29
18
0
27 Sep 2019
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
Haichao Zhang
Jianyu Wang
AAML
23
230
0
24 Jul 2019
PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving
Zelun Kong
Junfeng Guo
Ang Li
Cong Liu
AAML
41
126
0
09 Jul 2019
Adversarial Robustness via Label-Smoothing
Morgane Goibert
Elvis Dohmatob
AAML
10
18
0
27 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang
Jianbo Chen
Cho-Jui Hsieh
Jane-ling Wang
Michael I. Jordan
AAML
22
101
0
08 Jun 2019
Provably scale-covariant continuous hierarchical networks based on scale-normalized differential expressions coupled in cascade
T. Lindeberg
27
19
0
29 May 2019
What Do Adversarially Robust Models Look At?
Takahiro Itazuri
Yoshihiro Fukuhara
Hirokatsu Kataoka
Shigeo Morishima
19
5
0
19 May 2019
Bridging Adversarial Robustness and Gradient Interpretability
Beomsu Kim
Junghoon Seo
Taegyun Jeon
AAML
22
39
0
27 Mar 2019
A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
Saeid Asgari Taghanaki
Kumar Abhishek
Shekoofeh Azizi
Ghassan Hamarneh
AAML
31
40
0
03 Mar 2019
An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers
Hui Xie
Jirong Yi
Weiyu Xu
R. Mudumbai
AAML
29
3
0
27 Jan 2019
Defending Against Universal Perturbations With Shared Adversarial Training
Chaithanya Kumar Mummadi
Thomas Brox
J. H. Metzen
AAML
18
60
0
10 Dec 2018
Robustness via curvature regularization, and vice versa
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
J. Uesato
P. Frossard
AAML
29
318
0
23 Nov 2018
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
Kaidi Xu
Sijia Liu
Pu Zhao
Pin-Yu Chen
Huan Zhang
Quanfu Fan
Deniz Erdogmus
Yanzhi Wang
Xinyu Lin
AAML
29
160
0
05 Aug 2018
With Friends Like These, Who Needs Adversaries?
Saumya Jetley
Nicholas A. Lord
Philip Torr
AAML
21
70
0
11 Jul 2018
Understanding Measures of Uncertainty for Adversarial Example Detection
Lewis Smith
Y. Gal
UQCV
57
358
0
22 Mar 2018