Detecting Universal Trigger's Adversarial Attack with Honeypot
The Universal Trigger (UniTrigger) is a recently proposed, powerful adversarial textual attack method. Using a learning-based mechanism, UniTrigger generates a fixed phrase that, when added to any benign input, can drop the prediction accuracy of a textual neural network (NN) model to near zero on a target class. To defend against this new attack method, which can cause significant harm, in this paper we borrow the "honeypot" concept from the cybersecurity community and propose DARCY, a honeypot-based defense framework. DARCY adaptively searches for and injects multiple trapdoors into an NN model to "bait and catch" potential attacks. Through comprehensive experiments across four public datasets, we demonstrate that DARCY detects UniTrigger's adversarial attacks with up to 99% TPR and less than 1% FPR in most cases, while reducing prediction accuracy (in F1) on clean inputs by less than 1% on average. Finally, we show that DARCY with multiple trapdoors remains robust under different assumptions concerning attackers' knowledge and skills.
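To give a rough intuition for the trapdoor idea, the following is a minimal, hypothetical sketch (not the paper's implementation): a defender bakes a rare "trapdoor" phrase into the model's training so that a learned trigger search is drawn to it, then flags any incoming input whose features overlap the stored trapdoor signature. The vocabulary, trapdoor phrase, and threshold below are all illustrative assumptions.

```python
# Toy illustration of a honeypot/trapdoor detector (illustrative only;
# DARCY trains trapdoors into a neural text classifier, which is not done here).
import numpy as np

# Hypothetical toy vocabulary; "zoning tapir" plays the role of a rare trapdoor phrase.
VOCAB = ["good", "great", "bad", "awful", "movie", "plot", "zoning", "tapir"]
W2I = {w: i for i, w in enumerate(VOCAB)}

def featurize(text):
    """Bag-of-words vector over the toy vocabulary."""
    v = np.zeros(len(VOCAB))
    for tok in text.lower().split():
        if tok in W2I:
            v[W2I[tok]] += 1.0
    return v

# The defender stores the trapdoor's (normalized) feature signature.
TRAPDOOR = "zoning tapir"
trap_sig = featurize(TRAPDOOR)
trap_sig /= np.linalg.norm(trap_sig)

def detect(text, threshold=0.3):
    """Flag an input whose feature vector overlaps the trapdoor signature."""
    v = featurize(text)
    n = np.linalg.norm(v)
    if n == 0:
        return False
    return float(v / n @ trap_sig) > threshold

print(detect("great movie good plot"))     # clean input -> False
print(detect("zoning tapir great movie"))  # trigger-carrying input -> True
```

In the actual framework, the "signature" is a learned property of the model's internal representations rather than a surface token overlap, but the bait-then-detect flow is the same.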