The Fundamental Limits of Interval Arithmetic for Neural Networks

9 December 2021
M. Mirman
Maximilian Baader
Martin Vechev
Abstract

Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning. However, despite substantial efforts, progress on addressing this key challenge has stagnated, calling into question whether interval arithmetic is a viable path forward. In this paper we present two fundamental results on the limitations of interval arithmetic for analyzing neural networks. Our main impossibility theorem states that for any neural network classifying just three points, there is a valid specification over these points that interval analysis cannot prove. Further, in the restricted case of one-hidden-layer neural networks we show a stronger impossibility result: given any radius $\alpha < 1$, there is a set of $O(\alpha^{-1})$ points with robust radius $\alpha$, separated by distance $2$, that no one-hidden-layer network can be proven to classify robustly via interval analysis.
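To make the certification setting concrete, below is a minimal sketch of interval bound propagation through a one-hidden-layer ReLU network. It is an illustration only, not code from the paper: the weights are random, and the helper names `ibp_forward` and `ibp_relu` are assumptions made for this example.

```python
import numpy as np

def ibp_forward(lo, hi, W, b):
    """Propagate an input box [lo, hi] through an affine layer x -> W @ x + b.

    Splitting W into positive and negative parts gives, per output coordinate,
    the exact min and max of the affine map over the box.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# One-hidden-layer network with illustrative random weights.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

# L-infinity ball of radius alpha around an input point x.
x, alpha = np.array([0.5, -0.25]), 0.1
lo, hi = x - alpha, x + alpha

lo, hi = ibp_relu(*ibp_forward(lo, hi, W1, b1))
lo, hi = ibp_forward(lo, hi, W2, b2)

# IBP certifies class 0 if its worst-case lower bound beats the other
# class's best-case upper bound over the whole box. Because each layer's
# box is propagated coordinate-wise, the final bounds can be loose; that
# looseness is the slack the paper's impossibility results exploit.
print("certified for class 0:", lo[0] > hi[1])
```

The final comparison is the standard IBP certification check. The paper's theorems concern when such a check can succeed at all: per the abstract, for any network classifying just three points there is a valid specification over those points that interval analysis cannot prove.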
