Adversarial Robustness of Deep Convolutional Candlestick Learner

29 May 2020
Jun-Hao Chen
Samuel Yen-Chi Chen
Yun-Cheng Tsai
Chih-Shiang Shur
Communities: AAML
arXiv:2006.03686 (PDF, HTML)
Abstract

Deep learning (DL) has been applied extensively across a wide range of fields. However, DL models have been shown to be susceptible to certain kinds of perturbations known as adversarial attacks. To fully unlock the power of DL in critical fields such as financial trading, it is necessary to address such issues. In this paper, we present a method for constructing perturbed examples and use these examples to boost the robustness of the model. Our algorithm increases the stability of DL models for candlestick classification with respect to perturbations in the input data.
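
The idea sketched in the abstract, training the classifier on perturbed copies of the input to improve stability, can be illustrated with a short code sketch. The snippet below is a hypothetical example, not the authors' implementation: it assumes a generic PyTorch CNN over windows of OHLC candlestick bars and uses an FGSM-style gradient-sign step as the perturbation-construction method; the architecture, epsilon, and data shapes are placeholder assumptions.

    # Illustrative sketch only: FGSM-style perturbation and adversarial training
    # for a generic candlestick (OHLC) CNN classifier. The model, epsilon, and
    # data shapes are assumptions, not the paper's exact method.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class CandlestickCNN(nn.Module):
        """Toy CNN over windows of OHLC bars, input shape (batch, 4, window_len)."""

        def __init__(self, n_classes: int = 8, window_len: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.head = nn.Linear(32 * window_len, n_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))


    def fgsm_perturb(model, x, y, eps=0.01):
        """Construct perturbed examples with the fast gradient sign method."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, then detach from the graph.
        return (x_adv + eps * x_adv.grad.sign()).detach()


    def adversarial_training_step(model, optimizer, x, y, eps=0.01):
        """One training step on a mix of clean and perturbed examples."""
        model.train()
        x_adv = fgsm_perturb(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()


    if __name__ == "__main__":
        model = CandlestickCNN()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x = torch.randn(32, 4, 10)      # 32 windows of 10 OHLC bars (toy data)
        y = torch.randint(0, 8, (32,))  # toy candlestick-pattern labels
        print(adversarial_training_step(model, opt, x, y))

Training on the combined clean-plus-perturbed loss is a common way to trade a small amount of clean accuracy for stability under input perturbations, which is the kind of robustness the paper targets for candlestick classification.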
