ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

24 May 2016
Nicolas Papernot
Patrick McDaniel
Ian Goodfellow
    SILM
    AAML

Papers citing "Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples"

50 / 360 papers shown
Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks
Anqi Zhao
Tong Chu
Yahao Liu
Wen Li
Jingjing Li
Lixin Duan
AAML
28
16
0
18 Dec 2022
A Survey on Reinforcement Learning Security with Application to Autonomous Driving
Ambra Demontis
Maura Pintor
Christian Scano
Kathrin Grosse
Hsiao-Ying Lin
Chengfang Fang
Battista Biggio
Fabio Roli
AAML
49
4
0
12 Dec 2022
Carpet-bombing patch: attacking a deep network without usual requirements
Pol Labarbarie
Adrien Chan-Hon-Tong
Stéphane Herbin
Milad Leyli-Abadi
AAML
32
1
0
12 Dec 2022
General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments
Xiaogang Xu
Hengshuang Zhao
Philip Torr
Jiaya Jia
AAML
37
2
0
11 Dec 2022
Leveraging Unlabeled Data to Track Memorization
Mahsa Forouzesh
Hanie Sedghi
Patrick Thiran
NoLa
TDI
39
4
0
08 Dec 2022
Enhancing Quantum Adversarial Robustness by Randomized Encodings
Weiyuan Gong
D. Yuan
Weikang Li
D. Deng
AAML
26
19
0
05 Dec 2022
Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning
Eldor Abdukhamidov
Mohammed Abuhamad
Simon S. Woo
Eric Chan-Tin
Tamer Abuhmed
AAML
36
9
0
29 Nov 2022
Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning
Ethan Rathbun
Kaleel Mahmood
Sohaib Ahmad
Caiwen Ding
Marten van Dijk
AAML
24
4
0
26 Nov 2022
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners
E. T. Oldewage
J. Bronskill
Richard Turner
27
3
0
23 Nov 2022
Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack
Yunfeng Diao
He Wang
Tianjia Shao
Yong-Liang Yang
Kun Zhou
David C. Hogg
Meng Wang
AAML
45
7
0
21 Nov 2022
Butterfly Effect Attack: Tiny and Seemingly Unrelated Perturbations for Object Detection
N. Doan
Arda Yüksel
Chih-Hong Cheng
AAML
21
1
0
14 Nov 2022
What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness?
Nikolaos Tsilivis
Julia Kempe
AAML
52
18
0
11 Oct 2022
On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses
Anshuman Chhabra
Ashwin Sekhari
P. Mohapatra
OOD
AAML
45
8
0
04 Oct 2022
Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models
R. Olivier
H. Abdullah
Bhiksha Raj
AAML
26
1
0
17 Sep 2022
On the Transferability of Adversarial Examples between Encrypted Models
Miki Tanaka
Isao Echizen
Hitoshi Kiya
SILM
39
4
0
07 Sep 2022
Adversarial Detection: Attacking Object Detection in Real Time
Han-Ching Wu
Syed Yunas
Sareh Rowlands
Wenjie Ruan
Johan Wahlstrom
AAML
33
4
0
05 Sep 2022
Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps
Mohammadreza Amirian
Friedhelm Schwenker
Thilo Stadelmann
AAML
27
16
0
24 Aug 2022
Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry
M. Penrod
Harrison Termotto
Varshini Reddy
Jiayu Yao
Finale Doshi-Velez
Weiwei Pan
AAML
OOD
43
1
0
02 Aug 2022
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity
Martin Gubri
Maxime Cordy
Mike Papadakis
Yves Le Traon
Koushik Sen
AAML
35
51
0
26 Jul 2022
Decorrelative Network Architecture for Robust Electrocardiogram Classification
Christopher Wiedeman
Ge Wang
OOD
13
2
0
19 Jul 2022
Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations
H. Malik
Shahina Kunhimon
Muzammal Naseer
Salman Khan
Fahad Shahbaz Khan
AAML
33
8
0
18 Jul 2022
Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal
Xinwei Liu
Jian Liu
Yang Bai
Jindong Gu
Tao Chen
Xiaojun Jia
Xiaochun Cao
AAML
WIGM
33
26
0
17 Jul 2022
Provably Adversarially Robust Nearest Prototype Classifiers
Václav Voráček
Matthias Hein
AAML
25
11
0
14 Jul 2022
Machine Learning Security in Industry: A Quantitative Survey
Kathrin Grosse
L. Bieringer
Tarek R. Besold
Battista Biggio
Katharina Krombholz
40
32
0
11 Jul 2022
Statistical Detection of Adversarial examples in Blockchain-based Federated Forest In-vehicle Network Intrusion Detection Systems
I. Aliyu
Sélinde Van Engelenburg
Muhammed Muazu
Jinsul Kim
C. Lim
AAML
43
14
0
11 Jul 2022
Threat Assessment in Machine Learning based Systems
L. Tidjon
Foutse Khomh
27
17
0
30 Jun 2022
Transferable Graph Backdoor Attack
Shuiqiao Yang
Bao Gia Doan
Paul Montague
O. Vel
Tamas Abraham
S. Çamtepe
Damith C. Ranasinghe
S. Kanhere
AAML
49
36
0
21 Jun 2022
I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk
Rudolf Mayer
Andreas Rauber
57
106
0
16 Jun 2022
ReFace: Real-time Adversarial Attacks on Face Recognition Systems
Shehzeen Samarah Hussain
Todd P. Huster
Chris Mesterharm
Paarth Neekhara
Kevin R. An
Malhar Jere
Harshvardhan Digvijay Sikka
F. Koushanfar
AAML
26
6
0
09 Jun 2022
Pruning has a disparate impact on model accuracy
Cuong Tran
Ferdinando Fioretto
Jung-Eun Kim
Rakshit Naidu
45
38
0
26 May 2022
An Analytic Framework for Robust Training of Artificial Neural Networks
R. Barati
Reza Safabakhsh
Mohammad Rahmati
AAML
19
0
0
26 May 2022
Transferable Adversarial Attack based on Integrated Gradients
Y. Huang
A. Kong
AAML
40
50
0
26 May 2022
Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks
Siddhartha Datta
AAML
38
4
0
15 May 2022
Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks
Harel Berger
A. Dvir
Chen Hajaj
Rony Ronen
AAML
29
3
0
09 May 2022
A Simple Yet Efficient Method for Adversarial Word-Substitute Attack
Tianle Li
Yi Yang
AAML
32
0
0
07 May 2022
Detecting Backdoor Poisoning Attacks on Deep Neural Networks by Heatmap Clustering
Lukas Schulth
Christian Berghoff
Matthias Neu
AAML
25
5
0
27 Apr 2022
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies
Zhuang Qian
Kaizhu Huang
Qiufeng Wang
Xu-Yao Zhang
OOD
AAML
ObjD
54
72
0
26 Mar 2022
Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input
Junyoung Byun
Seungju Cho
Myung-Joon Kwon
Heeseon Kim
Changick Kim
AAML
DiffM
29
68
0
17 Mar 2022
One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy
Dayong Ye
Sheng Shen
Tianqing Zhu
B. Liu
Wanlei Zhou
MIACV
16
62
0
13 Mar 2022
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
Qilong Zhang
Chaoning Zhang
Chaoqun Li
Xuanhan Wang
Jingkuan Song
Lianli Gao
AAML
41
21
0
09 Mar 2022
Art-Attack: Black-Box Adversarial Attack via Evolutionary Art
P. Williams
Ke Li
AAML
27
2
0
07 Mar 2022
Adversarial robustness of sparse local Lipschitz predictors
Ramchandran Muthukumar
Jeremias Sulam
AAML
34
13
0
26 Feb 2022
StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection
Aqib Rashid
Jose Such
AAML
24
6
0
15 Feb 2022
Holistic Adversarial Robustness of Deep Learning Models
Pin-Yu Chen
Sijia Liu
AAML
54
16
0
15 Feb 2022
Adversarial Attacks and Defense Methods for Power Quality Recognition
Jiwei Tian
Buhong Wang
Jing Li
Zhen Wang
Mete Ozay
AAML
28
0
0
11 Feb 2022
Red Teaming Language Models with Language Models
Ethan Perez
Saffron Huang
Francis Song
Trevor Cai
Roman Ring
John Aslanides
Amelia Glaese
Nat McAleese
G. Irving
AAML
13
613
0
07 Feb 2022
Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains
Qilong Zhang
Xiaodan Li
YueFeng Chen
Jingkuan Song
Lianli Gao
Yuan He
Hui Xue
AAML
67
64
0
27 Jan 2022
Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen
Muhammad Ali Babar
AAML
42
22
0
12 Jan 2022
On Distinctive Properties of Universal Perturbations
Sung Min Park
K. Wei
Kai Y. Xiao
Jungshian Li
A. Madry
AAML
30
2
0
31 Dec 2021
Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently
Futa Waseda
Sosuke Nishikawa
Trung-Nghia Le
H. Nguyen
Isao Echizen
SILM
36
35
0
29 Dec 2021