ResearchTrend.AI

arXiv:2309.07072
The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning

13 September 2023
Alexander Bastounis
Alexander N. Gorban
Anders C. Hansen
D. Higham
Danil Prokhorov
Oliver J. Sutton
I. Tyukin
Qinghua Zhou
Abstract

In this work, we assess the theoretical limitations of determining guaranteed stability and accuracy of neural networks in classification tasks. We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk, potentially subject to weight regularisation. We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks in the above settings is extremely challenging, if at all possible, even when such ideal solutions exist within the given class of neural architectures.
