SGBA: A Stealthy Scapegoat Backdoor Attack against Deep Neural Networks

Abstract

With the development of Deep Neural Networks (DNNs) and the rapidly growing demand for sharing and reusing DNN models, an opening for backdoors remains. A backdoor can be injected into a third-party model and stays extremely stealthy under normal inputs, and has therefore been widely discussed. Backdoor attacks on deep neural networks have become a serious concern, prompting extensive research on both attacks and defenses. In this paper, we propose a stealthy scapegoat backdoor attack that evades mainstream detection schemes, which detect backdoors at either the class level or the model level. We create a scapegoat to mislead class-level detection schemes and, at the same time, turn the target model itself into an adversarial input to model-level detection schemes. This reveals that, although many effective backdoor defense schemes have been proposed, backdoor attacks on DNNs still need to be addressed. Evaluation results on three benchmark datasets demonstrate that the proposed attack achieves excellent performance in both aggressiveness and stealthiness against two state-of-the-art defense schemes.
