arXiv:2008.10106
Developing and Defeating Adversarial Examples

23 August 2020
Ian McDiarmid-Sterling
Allan Moser
Abstract

Breakthroughs in machine learning have resulted in state-of-the-art deep neural networks (DNNs) performing classification tasks in safety-critical applications. Recent research has demonstrated that DNNs can be attacked through adversarial examples: small perturbations to input data that cause the DNN to misclassify objects. The proliferation of DNNs raises important safety concerns about how to design systems that are robust to adversarial examples. In this work, we develop adversarial examples to attack the Yolo V3 object detector [1] and then study strategies to detect and neutralize these examples. Python code for this project is available at https://github.com/ianmcdiarmidsterling/adversarial
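As a rough sketch of how such a perturbation can be crafted: the abstract does not describe the attack itself, so the following is a generic fast gradient sign method (FGSM) example in PyTorch, not the authors' method, and the model, image, label, and epsilon names are illustrative assumptions.

    # Hypothetical FGSM sketch: perturb an input in the direction that
    # increases the model's loss, bounded by epsilon per pixel.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model: torch.nn.Module,
                    image: torch.Tensor,
                    label: torch.Tensor,
                    epsilon: float = 0.03) -> torch.Tensor:
        """Return image + epsilon * sign(grad of loss w.r.t. image)."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step along the sign of the gradient, then clamp to valid pixel range.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

In practice, an attack on an object detector such as Yolo V3 would replace the classification loss above with the detector's own objectness and class losses, but the gradient-based perturbation idea is the same.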
