Less is More: Understanding Word-level Textual Adversarial Attack via n-gram Frequency Descend

6 February 2023
Ning Lu
Shengcai Liu
Zhirui Zhang
Qi Wang
Haifeng Liu
Jiaheng Zhang
Abstract

Word-level textual adversarial attacks have demonstrated notable efficacy in misleading Natural Language Processing (NLP) models. Despite their success, the underlying reasons for their effectiveness and the fundamental characteristics of adversarial examples (AEs) remain obscure. This work aims to interpret word-level attacks by examining their n-gram frequency patterns. Our comprehensive experiments reveal that in approximately 90% of cases, word-level attacks lead to the generation of examples in which the frequency of n-grams decreases, a tendency we term the n-gram Frequency Descend (n-FD). This finding suggests a straightforward strategy to enhance model robustness: training models on examples with n-FD. To examine the feasibility of this strategy, we employed n-gram frequency information, as an alternative to conventional loss gradients, to generate perturbed examples in adversarial training. The experimental results indicate that the frequency-based approach performs comparably to the gradient-based approach in improving model robustness. Our research offers a novel and more intuitive perspective for understanding word-level textual adversarial attacks and proposes a new direction for improving model robustness.
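To make the n-FD notion concrete, here is a minimal Python sketch of how one might test whether a perturbed sentence exhibits n-gram Frequency Descend relative to its original. This is an illustration only: the particular statistic (a smoothed mean log-frequency over a reference corpus) and the names `build_freq_table` and `exhibits_n_fd` are our assumptions, not the paper's implementation.

```python
import math
from collections import Counter
from typing import List, Tuple

def ngrams(tokens: List[str], n: int) -> List[Tuple[str, ...]]:
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_freq_table(corpus: List[List[str]], n: int) -> Counter:
    """Count n-gram occurrences over a tokenized reference corpus."""
    table = Counter()
    for sent in corpus:
        table.update(ngrams(sent, n))
    return table

def mean_log_freq(tokens: List[str], n: int, table: Counter) -> float:
    """Mean log-frequency of a sentence's n-grams, with add-one
    smoothing so unseen n-grams contribute log(1) = 0."""
    grams = ngrams(tokens, n)
    if not grams:
        return 0.0
    return sum(math.log(table[g] + 1) for g in grams) / len(grams)

def exhibits_n_fd(original: List[str], perturbed: List[str],
                  n: int, table: Counter) -> bool:
    """True if the perturbed example's n-grams are, on average,
    rarer than the original's, i.e. n-gram Frequency Descend."""
    return mean_log_freq(perturbed, n, table) < mean_log_freq(original, n, table)

# Toy usage: a word substitution that pushes the sentence into
# rarer bigram territory is flagged as 2-FD.
corpus = [["the", "movie", "was", "great"],
          ["the", "movie", "was", "good"],
          ["the", "film", "was", "great"]]
table = build_freq_table(corpus, n=2)
orig = ["the", "movie", "was", "great"]
adv  = ["the", "flick", "was", "great"]   # "flick" is unseen in the corpus
print(exhibits_n_fd(orig, adv, n=2, table=table))  # True
```

A statistic of this kind is also what the proposed frequency-based adversarial training would optimize in place of a loss gradient: among candidate word substitutions, prefer those that most decrease the example's n-gram frequency.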

View on arXiv: 2302.02568