ResearchTrend.AI

How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks?

arXiv:2107.07970 · 16 July 2021
Camille Koenders, Johannes Filla, Nicolai Schneider, Vinicius Woloszyn
Topic: GNN

Papers citing "How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks?" (2 papers)
Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection
Sungwon Park, Sungwon Han, Xing Xie, Jae-Gil Lee, Meeyoung Cha
17 Jun 2024
Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions
I. Alsmadi, Kashif Ahmad, Mahmoud Nazzal, Firoj Alam, Ala I. Al-Fuqaha, Abdallah Khreishah, A. Algosaibi
Topic: AAML
26 Oct 2021