ResearchTrend.AI
arXiv:2302.01961
Asymmetric Certified Robustness via Feature-Convex Neural Networks

3 February 2023
Samuel Pfrommer
Brendon G. Anderson
Julien Piet
Somayeh Sojoudi
Abstract

Recent works have introduced input-convex neural networks (ICNNs) as learning models with advantageous training, inference, and generalization properties linked to their convex structure. In this paper, we propose a novel feature-convex neural network architecture as the composition of an ICNN with a Lipschitz feature map in order to achieve adversarial robustness. We consider the asymmetric binary classification setting with one "sensitive" class, and for this class we prove deterministic, closed-form, and easily-computable certified robust radii for arbitrary ℓ_p-norms. We theoretically justify the use of these models by characterizing their decision region geometry, extending the universal approximation theorem for ICNN regression to the classification setting, and proving a lower bound on the probability that such models perfectly fit even unstructured uniformly distributed data in sufficiently high dimensions. Experiments on Malimg malware classification and subsets of the MNIST, Fashion-MNIST, and CIFAR-10 datasets show that feature-convex classifiers attain state-of-the-art certified ℓ_1-radii as well as substantial ℓ_2- and ℓ_∞-radii while being far more computationally efficient than any competitive baseline.
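The composition described in the abstract can be sketched in a few lines of numpy. This is not the authors' implementation: the dimensions, weight shapes, and the choice of a linear feature map are illustrative assumptions. The key structural point is that an ICNN stays convex in its input when hidden-to-hidden weights are nonnegative and activations are convex and nondecreasing (e.g. ReLU), while input skip connections may be arbitrary; composing such a network with a Lipschitz feature map yields the feature-convex classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Toy dimensions (hypothetical, not from the paper):
# d = input dim, q = feature dim, h = hidden width.
d, q, h = 4, 4, 8

# Lipschitz feature map phi. Here it is a fixed linear map, so its
# l2-Lipschitz constant is simply its largest singular value.
Phi = rng.normal(size=(q, d))
lip_phi = np.linalg.svd(Phi, compute_uv=False)[0]

def phi(x):
    return Phi @ x

# Input-convex network g. The first-layer and skip weights are
# unconstrained; the hidden-to-output weights are forced nonnegative,
# which (with the convex, nondecreasing ReLU) makes g convex in z.
W0 = rng.normal(size=(h, q))          # first layer: unconstrained
W1 = np.abs(rng.normal(size=(1, h)))  # hidden-to-output: nonnegative
S1 = rng.normal(size=(1, q))          # input skip connection: unconstrained

def g(z):
    return (W1 @ relu(W0 @ z) + S1 @ z).item()

def f(x):
    # Feature-convex classifier: predict the "sensitive" class iff f(x) > 0.
    return g(phi(x))

# Sanity check: with a linear phi, f = g ∘ phi is convex, so the value at
# a midpoint is at most the average of the endpoint values (Jensen).
x1, x2 = rng.normal(size=d), rng.normal(size=d)
mid = f((x1 + x2) / 2)
assert mid <= 0.5 * (f(x1) + f(x2)) + 1e-9
```

Convexity of f in the input is what makes the paper's closed-form certified radii possible for the sensitive class; the radii themselves depend on the Lipschitz constant of φ (here `lip_phi`) and are derived in the paper, so they are not reproduced in this sketch.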
