arXiv:2105.10821
Statistical Testing under Distributional Shifts

22 May 2021
Nikolaj Thams
Sorawit Saengkyongam
Niklas Pfister
J. Peters
Abstract

In this work, we introduce statistical testing under distributional shifts. We are interested in the hypothesis $P^* \in H_0$ for a target distribution $P^*$, but observe data from a different distribution $Q^*$. We assume that $P^*$ is related to $Q^*$ through a known shift $\tau$ and formally introduce hypothesis testing in this setting. We propose a general testing procedure that first resamples from the observed data to construct an auxiliary data set and then applies an existing test in the target domain. We prove that if the size of the resample is at most $o(\sqrt{n})$ and the resampling weights are well-behaved, this procedure inherits the pointwise asymptotic level and power from the target test. If the map $\tau$ is estimated from data, we can maintain the above guarantees under mild conditions, provided the estimation works sufficiently well. We further extend our results to finite-sample level, uniform asymptotic level, and a different resampling scheme. Testing under distributional shifts allows us to tackle a diverse set of problems. We argue that it may prove useful in reinforcement learning and covariate shift, we show how it reduces conditional to unconditional independence testing, and we provide example applications in causal inference.
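The resampling idea in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm; it assumes a toy setting where the shift $\tau$ is fully known, so the weights $dP^*/dQ^*$ can be computed in closed form: data are observed under $Q^* = N(0.5, 1)$, the target is $P^* = N(0, 1)$, and $H_0$ states that the target mean is zero. An auxiliary sample of size $m = o(\sqrt{n})$ is drawn with weights proportional to the density ratio, and an off-the-shelf target-domain test (here a one-sample t-test) is applied to it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Observed data from Q* = N(0.5, 1); target distribution is P* = N(0, 1).
n = 10_000
x_obs = rng.normal(loc=0.5, scale=1.0, size=n)

# Resampling weights proportional to the density ratio dP*/dQ*
# (computable here because the shift is known in this toy example).
log_w = stats.norm(0.0, 1.0).logpdf(x_obs) - stats.norm(0.5, 1.0).logpdf(x_obs)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Auxiliary sample of size m = o(sqrt(n)), as required by the level/power guarantee.
m = int(n ** 0.4)
x_aux = rng.choice(x_obs, size=m, replace=True, p=w)

# Apply an existing test in the target domain: H0 says the target mean is 0.
t_stat, p_value = stats.ttest_1samp(x_aux, popmean=0.0)
print(f"m = {m}, p-value = {p_value:.3f}")
```

In practice the density ratio would be derived from the known (or estimated) map $\tau$ rather than written down directly, and the paper analyzes more careful resampling schemes; this sketch only shows the two-step structure of resample-then-test.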
