Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss

21 May 2025
Bo-Han Lai
Pin-Han Huang
Bo-Han Kung
Shang-Tse Chen
Abstract

Lipschitz neural networks are well known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers for constructing more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points. This enables Lipschitz models to provide better certified robustness. By employing our BRO layer and loss function, we design BRONet, a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available at this https URL.
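The abstract does not spell out the layer's construction, but a standard way to build an orthogonal map from a low-rank parameter is the block (Householder) reflector W = I - 2 V (VᵀV)⁻¹ Vᵀ, which is orthogonal and therefore 1-Lipschitz. Below is a minimal PyTorch sketch of such a layer under that assumption; the class and parameter names are illustrative and not taken from the authors' released implementation.

```python
import torch
import torch.nn as nn


class BlockReflectorLinear(nn.Module):
    """Linear layer whose weight is an orthogonal block reflector (sketch)."""

    def __init__(self, features: int, rank: int):
        super().__init__()
        # V has shape (features, rank); the reflector is built from its column space.
        self.V = nn.Parameter(torch.randn(features, rank) / features**0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        V = self.V
        # (V^T V)^{-1} makes P = V (V^T V)^{-1} V^T an orthogonal projection.
        gram_inv = torch.inverse(V.T @ V)
        # W = I - 2P is orthogonal, hence norm-preserving and 1-Lipschitz.
        W = torch.eye(V.shape[0], device=V.device) - 2.0 * V @ gram_inv @ V.T
        return x @ W.T


if __name__ == "__main__":
    layer = BlockReflectorLinear(features=64, rank=16)
    x = torch.randn(8, 64)
    y = layer(x)
    # Orthogonality preserves input norms, which underpins the Lipschitz bound.
    print(torch.allclose(x.norm(dim=1), y.norm(dim=1), atol=1e-4))
```

Because W is orthogonal by construction, no spectral-norm projection or iterative orthogonalization is needed at inference time; the paper's actual BRO layer and the logit annealing loss may differ in detail from this sketch.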

@article{lai2025_2505.15174,
  title={Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss},
  author={Bo-Han Lai and Pin-Han Huang and Bo-Han Kung and Shang-Tse Chen},
  journal={arXiv preprint arXiv:2505.15174},
  year={2025}
}