Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks

18 April 2019
Shawn Shan, Emily Wenger, Bolun Wang, Yangqiu Song, Haitao Zheng, Ben Y. Zhao

Papers citing "Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks"

19 / 19 papers shown
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks
Zhenxing Niu, Yuyao Sun, Qiguang Miao, Rong Jin, Gang Hua
AAML
28 May 2024

A LLM Assisted Exploitation of AI-Guardian
Nicholas Carlini
ELM, SILM
20 Jul 2023

Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models
Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, Ben Y. Zhao
WIGM
08 Feb 2023

"Real Attackers Don't Compute Gradients": Bridging the Gap Between
  Adversarial ML Research and Practice
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese
Hyrum S. Anderson
Savino Dambra
D. Freeman
Fabio Pierazzi
Kevin A. Roundy
AAML
33
75
0
29 Dec 2022
Increasing Confidence in Adversarial Robustness Evaluations
Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini
AAML
28 Jun 2022

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML
13 Oct 2021

Multi-Trigger-Key: Towards Multi-Task Privacy Preserving In Deep Learning
Ren Wang, Zhe Xu, Alfred Hero
06 Oct 2021

Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma, Huming Qiu, Yansong Gao, Zhi-Li Zhang, A. Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, S. Al-Sarawi, Derek Abbott
MQ
20 Aug 2021

TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
Esha Sarkar, Michail Maniatakos
14 Aug 2021

Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks
Suyoung Lee, Wonho Song, Suman Jana, M. Cha, Sooel Son
AAML
18 Jun 2021

RBNN: Memory-Efficient Reconfigurable Deep Binary Neural Network with IP Protection for Internet of Things
Huming Qiu, Hua Ma, Zhi-Li Zhang, Yifeng Zheng, Anmin Fu, Pan Zhou, Yansong Gao, Derek Abbott, S. Al-Sarawi
MQ
09 May 2021

Hidden Backdoors in Human-Centric Language Models
Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu
SILM
01 May 2021

"What's in the box?!": Deflecting Adversarial Attacks by Randomly
  Deploying Adversarially-Disjoint Models
"What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models
Sahar Abdelnabi
Mario Fritz
AAML
27
7
0
09 Feb 2021
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
AAML
21 Jul 2020

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
AAML
17 Jul 2020

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
30 Sep 2019

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
AAML
26 Jun 2019

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML
04 Nov 2016

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML
08 Jul 2016