Fault Tolerance of Neural Networks in Adversarial Settings

30 October 2019
Vasisht Duddu
N. Pillai
D. V. Rao
V. Balas
    SILM
    AAML
Abstract

Artificial Intelligence systems require a thorough assessment of the different pillars of trust, namely fairness, interpretability, data and model privacy, reliability (safety) and robustness against adversarial attacks. While these research problems have been extensively studied in isolation, an understanding of the trade-offs between the different pillars of trust is lacking. To this end, the trade-off between fault tolerance, privacy and adversarial robustness is evaluated for the specific case of Deep Neural Networks, by considering two adversarial settings under a security and a privacy threat model. Specifically, this work studies the impact on the fault tolerance of the Neural Network of training the model with noise added to the inputs (Adversarial Robustness) and noise added to the gradients (Differential Privacy). While training models with noise added to inputs, gradients or weights enhances fault tolerance, it is observed that adversarial robustness and fault tolerance are at odds with each other. On the other hand, $(\epsilon, \delta)$-Differentially Private models enhance fault tolerance, which, measured using the generalisation error, theoretically has an upper bound of $e^{\epsilon} - 1 + \delta$. This novel study of the trade-off between different elements of trust is pivotal for training a model which satisfies the requirements for the different pillars of trust simultaneously.
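The abstract contrasts two noise-injection settings: noise added to the inputs during training (Adversarial Robustness) and noise added to the gradients (Differential Privacy). Below is a minimal PyTorch sketch of these two settings, assuming a toy model, toy data and hypothetical hyperparameters (SIGMA_INPUT, CLIP_NORM, SIGMA_GRAD); it is an illustration, not the authors' implementation, and the gradient-noise branch clips the batch gradient rather than per-example gradients as a full DP-SGD procedure would.

# Minimal sketch (not the authors' code) of the two noise-injection settings
# discussed in the abstract, with assumed hyperparameters and toy data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small illustrative model and data (assumed, not from the paper).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 20)           # toy batch of inputs
y = torch.randint(0, 2, (32,))    # toy labels

SIGMA_INPUT = 0.1   # std of input noise (adversarial-robustness-style training)
CLIP_NORM = 1.0     # gradient clipping bound (DP-SGD-style)
SIGMA_GRAD = 0.5    # std of gradient noise, scaled by the clipping bound

def train_step(noise_target: str) -> float:
    """One training step with noise added either to the inputs or to the gradients."""
    optimizer.zero_grad()

    if noise_target == "input":
        # Setting 1: perturb the inputs with Gaussian noise before the forward pass.
        noisy_x = x + SIGMA_INPUT * torch.randn_like(x)
        loss = loss_fn(model(noisy_x), y)
        loss.backward()
    elif noise_target == "gradient":
        # Setting 2 (DP-SGD-like): clip the gradient norm, then add Gaussian noise.
        loss = loss_fn(model(x), y)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP_NORM)
        for p in model.parameters():
            if p.grad is not None:
                p.grad += SIGMA_GRAD * CLIP_NORM * torch.randn_like(p.grad)
    else:
        raise ValueError("noise_target must be 'input' or 'gradient'")

    optimizer.step()
    return loss.item()

print("input-noise step loss:   ", train_step("input"))
print("gradient-noise step loss:", train_step("gradient"))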
