Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100

5 August 2021 · arXiv:2108.04062
Sahil Singla, Surbhi Singla, Soheil Feizi
Abstract

Training convolutional neural networks (CNNs) with a strict Lipschitz constraint under the $l_2$ norm is useful for provable adversarial robustness, interpretable gradients and stable training. While 1-Lipschitz CNNs can be designed by enforcing a 1-Lipschitz constraint on each layer, training such networks requires each layer to have an orthogonal Jacobian matrix (for all inputs) to prevent the gradients from vanishing during backpropagation. A layer with this property is said to be Gradient Norm Preserving (GNP). In this work, we introduce a procedure to certify the robustness of 1-Lipschitz CNNs by relaxing the orthogonalization of the last linear layer of the network, which significantly advances the state of the art for both standard and provable robust accuracies on CIFAR-100 (gains of 4.80% and 4.71%, respectively). We further boost their robustness by introducing (i) a novel Gradient Norm Preserving activation function called the Householder activation function (that includes every GroupSort activation) and (ii) a certificate regularization. On CIFAR-10, we achieve significant improvements over prior works in provable robust accuracy (5.81%) with only a minor drop in standard accuracy (-0.29%). Code for reproducing all experiments in the paper is available at https://github.com/singlasahil14/SOC.
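
The Householder (HH) activation mentioned in the abstract can be made concrete with a small sketch. Under the assumption that channels are processed in pairs with one learnable unit direction v per pair, the map returns the input z unchanged when v^T z > 0 and otherwise applies the Householder reflection (I - 2 v v^T) z; both branches have an orthogonal Jacobian, so the layer is Gradient Norm Preserving, and initializing v proportional to (1, -1) recovers the MaxMin/GroupSort activation as a special case. The module below is an illustrative PyTorch sketch, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HouseholderActivation(nn.Module):
    """Sketch of an order-1 Householder (HH) activation.

    Assumption: channels are grouped in pairs, with one learnable 2-d
    direction v per pair. For each pair z in R^2 the output is z when
    v^T z > 0, and the reflection (I - 2 v v^T) z otherwise. Both branches
    have an orthogonal Jacobian, so the map is Gradient Norm Preserving.
    """

    def __init__(self, num_channels: int):
        super().__init__()
        assert num_channels % 2 == 0, "channels must come in pairs"
        # Initialise every direction to (1, -1); after normalisation this
        # makes the activation coincide with MaxMin / GroupSort-2.
        init = torch.tensor([1.0, -1.0]).repeat(num_channels // 2, 1)
        self.v = nn.Parameter(init)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) -> view channel pairs as 2-vectors.
        b, c, h, w = x.shape
        z = x.view(b, c // 2, 2, h, w)
        v = F.normalize(self.v, dim=1)             # keep ||v||_2 = 1
        v = v.view(1, c // 2, 2, 1, 1)
        vz = (v * z).sum(dim=2, keepdim=True)      # v^T z per pair, per location
        reflected = z - 2.0 * vz * v               # (I - 2 v v^T) z
        out = torch.where(vz > 0, z, reflected)    # identity or reflection
        return out.view(b, c, h, w)
```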

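The certificate and the certificate regularization can likewise be sketched. For a 1-Lipschitz ($l_2$) feature extractor followed by a linear layer with weight rows w_j, the logit gap f_t(x) - f_j(x) is ||w_t - w_j||_2-Lipschitz in the input, so min over j != t of (f_t - f_j) / ||w_t - w_j||_2 is a certified $l_2$ radius at x; if the last layer were also orthogonalized, this would reduce to the usual margin/sqrt(2) bound, whereas relaxing that orthogonalization lets the certificate use the actual row-difference norms. The helpers below (certified_radius, loss_with_certificate_regularization, and the weight gamma) are illustrative assumptions, not the authors' exact training objective.

```python
import torch
import torch.nn.functional as F

def certified_radius(logits: torch.Tensor, last_weight: torch.Tensor,
                     labels: torch.Tensor) -> torch.Tensor:
    """Certified l2 radius under the assumption that the feature extractor
    feeding the final linear layer is 1-Lipschitz (hypothetical helper).

    radius(x) = min_{j != t} (f_t(x) - f_j(x)) / ||w_t - w_j||_2
    """
    w_true = last_weight[labels]                                   # (n, feat_dim)
    # Pairwise weight-difference norms ||w_t - w_j||_2 for every class j.
    diff_norms = torch.norm(w_true.unsqueeze(1) - last_weight.unsqueeze(0),
                            dim=2)                                 # (n, classes)
    logit_true = logits.gather(1, labels.unsqueeze(1))             # (n, 1)
    gaps = (logit_true - logits) / diff_norms.clamp_min(1e-12)     # (n, classes)
    gaps.scatter_(1, labels.unsqueeze(1), float("inf"))            # skip j == t
    return gaps.min(dim=1).values                                  # (n,)

def loss_with_certificate_regularization(logits, last_weight, labels, gamma=0.5):
    """Cross-entropy plus a certificate-regularization term: for correctly
    classified examples, subtract gamma times the certified radius so that
    training is pushed toward larger provable margins. The exact form and
    the value of gamma here are illustrative assumptions."""
    ce = F.cross_entropy(logits, labels)
    radius = certified_radius(logits, last_weight, labels)
    correct = (logits.argmax(dim=1) == labels).float()
    cert_term = (correct * radius.clamp_min(0.0)).mean()
    return ce - gamma * cert_term
```

Provably robust accuracy at a radius r is then the fraction of test points that are correctly classified with a certified radius above r; in this line of work r = 36/255 is the common choice for CIFAR-10 and CIFAR-100.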