ResearchTrend.AI

Analysis of the vulnerability of machine learning regression models to adversarial attacks using data from 5G wireless networks

1 May 2025
Leonid Legashev
Artur Zhigalov
Denis Parfenov
Abstract

This article describes the creation of a script and an analytical study of a dataset generated with the DeepMIMO emulator. An adversarial attack was carried out using the FGSM method with gradient maximization. The effectiveness of binary classifiers in detecting the distorted data is compared. The dynamics of the regression model's quality metrics were analyzed under three conditions: without adversarial attacks, during an adversarial attack, and after the distorted data was isolated. It is shown that an adversarial FGSM attack with gradient maximization increases the MSE metric by 33% and decreases the R2 score by 10% on average. The LightGBM binary classifier identifies data with adversarial anomalies with 98% accuracy. Regression machine learning models are thus susceptible to adversarial attacks, but rapid analysis of network traffic and the data transmitted over the network makes it possible to detect malicious activity.
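To make the attack concrete, here is a minimal sketch of FGSM against a regression model. It uses synthetic features and an ordinary least-squares fit as stand-ins for the DeepMIMO-derived dataset and the paper's actual regressors (both are assumptions for illustration); for linear regression with MSE loss, the input gradient needed by FGSM is available in closed form.

```python
import numpy as np

# Synthetic stand-in for the 5G wireless dataset (hypothetical features/targets).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.1 * rng.normal(size=n)

# Fit ordinary least squares as a simple regression model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    r = X @ w - y
    return float(np.mean(r ** 2))

# FGSM with gradient maximization: perturb each input in the sign
# direction of the loss gradient. For MSE and a linear model,
# d(loss)/dx = 2 * (w.x - y) * w per sample.
eps = 0.1
grad = 2.0 * (X @ w - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad)

print(f"clean MSE:       {mse(X, y, w):.4f}")
print(f"adversarial MSE: {mse(X_adv, y, w):.4f}")  # MSE rises under attack
```

The same perturbed samples could then be labeled and fed to a binary classifier (LightGBM in the paper) to train an adversarial-anomaly detector.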

@article{legashev2025_2505.00487,
  title={Analysis of the vulnerability of machine learning regression models to adversarial attacks using data from 5G wireless networks},
  author={Leonid Legashev and Artur Zhigalov and Denis Parfenov},
  journal={arXiv preprint arXiv:2505.00487},
  year={2025}
}