Robustness of ML-Enhanced IDS to Stealthy Adversaries

21 April 2021
Vance Wong
John A. Emanuello
AAML
arXiv:2104.10742 · PDF · HTML
Abstract

Intrusion Detection Systems (IDS) enhanced with Machine Learning (ML) have demonstrated the capacity to efficiently build a profile of "normal" cyber behavior and thereby detect malicious activity with greater accuracy than traditional rule-based IDS. Because these models are largely black boxes, their acceptance requires proof of robustness to stealthy adversaries. Since it is impossible, outside of controlled experiments, to build a baseline from activity completely free of malicious cyber actors, the training data for deployed models will be poisoned with examples of the very activity analysts would want to be alerted about. We train an autoencoder-based anomaly detection system on network activity with various proportions of malicious activity mixed in and demonstrate that it is robust to this sort of poisoning.

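For readers unfamiliar with the setup the abstract describes, the sketch below shows how an autoencoder anomaly detector of this general kind can be trained on a baseline that is deliberately contaminated with a small fraction of malicious examples, with per-sample reconstruction error used as the alert score. This is a minimal illustration, not the authors' implementation: the architecture, synthetic features, contamination rate, and 99th-percentile alert threshold are all assumptions made for the example.

# Minimal sketch (not the paper's code): autoencoder anomaly detection
# trained on a "poisoned" baseline, i.e. a feature matrix of network
# activity in which a small, unlabeled fraction of rows is malicious.
# Dimensions, contamination rate, and threshold are illustrative.
import torch
import torch.nn as nn

def build_autoencoder(n_features: int, latent_dim: int = 8) -> nn.Module:
    # Symmetric encoder/decoder; reconstruction error is the anomaly score.
    return nn.Sequential(
        nn.Linear(n_features, 32), nn.ReLU(),
        nn.Linear(32, latent_dim), nn.ReLU(),
        nn.Linear(latent_dim, 32), nn.ReLU(),
        nn.Linear(32, n_features),
    )

def train(model: nn.Module, X_train: torch.Tensor, epochs: int = 50) -> None:
    # X_train may contain a small proportion of malicious flows; the question
    # studied in the paper is whether that contamination degrades detection.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_train), X_train)
        loss.backward()
        opt.step()

def anomaly_scores(model: nn.Module, X: torch.Tensor) -> torch.Tensor:
    # Per-sample mean squared reconstruction error.
    with torch.no_grad():
        return ((model(X) - X) ** 2).mean(dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    n_features = 20
    # Synthetic stand-ins for preprocessed network-flow features.
    benign = torch.randn(2000, n_features)
    poison = torch.randn(20, n_features) + 3.0          # ~1% contamination
    X_train = torch.cat([benign, poison])

    model = build_autoencoder(n_features)
    train(model, X_train)

    # Alert on anything whose reconstruction error exceeds a high quantile
    # of the training-set scores (the threshold choice is an assumption).
    threshold = torch.quantile(anomaly_scores(model, X_train), 0.99)
    malicious_test = torch.randn(20, n_features) + 3.0  # held-out malicious
    alerts = (anomaly_scores(model, malicious_test) > threshold).sum()
    print("alerts on held-out malicious samples:", int(alerts))

In an experiment of the kind the abstract describes, robustness would be assessed by sweeping the proportion of malicious activity mixed into the training set and checking whether detection of held-out malicious activity degrades.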