ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems

2 August 2019
Wenbo Guo, Lun Wang, Xinyu Xing, Min Du, Dawn Song
arXiv:1908.01763

Papers citing "TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems"

Showing 16 of 66 citing papers:
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Meng Wang
AAML · 31 Jul 2020

Cassandra: Detecting Trojaned Networks from Adversarial Perturbations
Xiaoyu Zhang, Ajmal Mian, Rohit Gupta, Nazanin Rahnavard, M. Shah
AAML · 28 Jul 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
AAML · 21 Jul 2020

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
AAML · 17 Jul 2020

Odyssey: Creation, Analysis and Detection of Trojan Models
Marzieh Edraki, Nazmul Karim, Nazanin Rahnavard, Ajmal Mian, M. Shah
AAML · 16 Jul 2020

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao
AAML · 25 Jun 2020

Backdoor Attacks to Graph Neural Networks
Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
GNN · 19 Jun 2020

An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
AAML · 15 Jun 2020

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov
AAML, FedML, SILM · 08 May 2020

PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
Junfeng Guo, Zelun Kong, Cong Liu
AAML · 24 Mar 2020

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis
AAML · 18 Nov 2019

NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations
Xijie Huang, M. Alzantot, Mani B. Srivastava
AAML · 18 Nov 2019

REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, R. Jia, Bo-wen Li, D. Song
AAML · 17 Nov 2019

Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis
AAML · 27 Aug 2019

Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe
AAML · 09 Aug 2019

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
AAML · 02 Dec 2018