Fast Differentiable Clipping-Aware Normalization and Rescaling

15 July 2020
Jonas Rauber
Matthias Bethge
arXiv:2007.07677
Abstract

Rescaling a vector $\vec{\delta} \in \mathbb{R}^n$ to a desired length is a common operation in many areas such as data science and machine learning. When the rescaled perturbation $\eta \vec{\delta}$ is added to a starting point $\vec{x} \in D$ (where $D$ is the data domain, e.g. $D = [0, 1]^n$), the resulting vector $\vec{v} = \vec{x} + \eta \vec{\delta}$ will in general not be in $D$. To enforce that the perturbed vector $\vec{v}$ is in $D$, the values of $\vec{v}$ can be clipped to $D$. This subsequent element-wise clipping to the data domain does however reduce the effective perturbation size and thus interferes with the rescaling of $\vec{\delta}$. The optimal rescaling $\eta$ to obtain a perturbation with the desired norm after the clipping can be iteratively approximated using a binary search. However, such an iterative approach is slow and non-differentiable. Here we show that the optimal rescaling can be found analytically using a fast and differentiable algorithm. Our algorithm works for any $p$-norm and can be used to train neural networks on inputs with normalized perturbations. We provide native implementations for PyTorch, TensorFlow, JAX, and NumPy based on EagerPy.
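To make the setup concrete, here is a minimal NumPy sketch of the iterative binary-search baseline the abstract describes, for the example domain $D = [0, 1]^n$. This is not code from the paper; the function names and the doubling-based bracketing of the upper bound are my own choices:

```python
import numpy as np

def effective_norm(x, delta, eta, p=2):
    """L_p norm of the perturbation that survives clipping to [0, 1]^n."""
    v = np.clip(x + eta * delta, 0.0, 1.0)
    return np.linalg.norm((v - x).ravel(), ord=p)

def binary_search_rescaling(x, delta, eps, p=2, steps=50):
    """Iteratively approximate eta so the clipped perturbation has norm eps."""
    lo, hi = 0.0, 1.0
    # grow the upper bound until it overshoots the target norm (or saturates)
    while effective_norm(x, delta, hi, p) < eps and hi < 1e12:
        hi *= 2.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if effective_norm(x, delta, mid, p) < eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Each bisection step costs a full clip-and-norm evaluation, and the hard comparisons make the result non-differentiable in the inputs, which is exactly the drawback the abstract points out.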

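For intuition on why a closed-form solution exists: with $D = [0, 1]^n$ and $\eta \ge 0$, the $i$-th coordinate of the clipped perturbation has magnitude $\min(\eta |\delta_i|, c_i)$, where $c_i$ is the distance from $x_i$ to the boundary in the direction of $\delta_i$. The squared $L^2$ norm is therefore a piecewise-quadratic, non-decreasing function of $\eta$ whose breakpoints are the saturation thresholds $c_i / |\delta_i|$; sorting those thresholds lets one solve for $\eta$ exactly on the segment where the target norm is reached. The sketch below is my reconstruction of this idea for $p = 2$, not the authors' reference implementation (which, per the abstract, ships natively for PyTorch, TensorFlow, JAX, and NumPy via EagerPy):

```python
import numpy as np

def l2_clipping_aware_rescaling(x, delta, eps):
    """Return eta >= 0 with || clip(x + eta*delta, 0, 1) - x ||_2 == eps,
    assuming eps is attainable. Sort-based closed-form sketch for p = 2
    and D = [0, 1]^n; an illustration, not the authors' implementation."""
    x = np.asarray(x, dtype=np.float64).ravel()
    delta = np.asarray(delta, dtype=np.float64).ravel()

    abs_delta = np.abs(delta)
    # distance from x_i to the boundary in the direction delta_i moves
    space = np.where(delta >= 0, 1.0 - x, x)

    # coordinates with delta_i == 0 never move and never clip
    moving = abs_delta > 0
    abs_delta, space = abs_delta[moving], space[moving]
    if abs_delta.size == 0:
        return 0.0

    # eta at which coordinate i saturates (starts being clipped)
    eta_break = space / abs_delta
    order = np.argsort(eta_break)
    eta_break = eta_break[order]
    delta2 = abs_delta[order] ** 2
    space2 = space[order] ** 2

    # On the segment after the k-th smallest breakpoint,
    # f(eta) = ||clipped perturbation||^2 = B_k + A_k * eta^2, where B_k
    # sums space2 over saturated coords and A_k sums delta2 over the rest.
    B = np.concatenate(([0.0], np.cumsum(space2)))
    A = np.concatenate((np.cumsum(delta2[::-1])[::-1], [0.0]))

    # f evaluated at eta = 0 and at every breakpoint (non-decreasing)
    boundaries = np.concatenate(([0.0], eta_break))
    f_at_boundaries = B + A * boundaries ** 2

    # locate the segment where f crosses eps^2, then solve the quadratic
    k = np.searchsorted(f_at_boundaries, eps ** 2, side="right") - 1
    if A[k] == 0.0:  # eps exceeds the largest attainable norm: all coords clip
        return eta_break[-1]
    return np.sqrt((eps ** 2 - B[k]) / A[k])
```

A quick consistency check against the binary-search baseline above:

```python
x, d = np.random.rand(1000), np.random.randn(1000)
eta = l2_clipping_aware_rescaling(x, d, eps=2.0)
print(effective_norm(x, d, eta))  # ~2.0, agreeing with binary_search_rescaling
```

Unlike the bisection loop, the value path here (gathers after a sort, cumulative sums, a division, a square root) consists of operations that are differentiable almost everywhere, with the segment index acting only as a selection, so in a framework implementation gradients can flow through the selected values; this is what lets an analytic rescaling of this kind sit inside a training loop, as the abstract claims.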