ResearchTrend.AI

arXiv:1910.13972
Balancing Gaussian vectors in high dimension

30 October 2019
Paxton Turner
Raghu Meka
Philippe Rigollet
Abstract

Motivated by problems in controlled experiments, we study the discrepancy of random matrices with continuous entries where the number of columns $n$ is much larger than the number of rows $m$. Our first result shows that if $\omega(1) = m = o(n)$, a matrix with i.i.d. standard Gaussian entries has discrepancy $\Theta(\sqrt{n}\, 2^{-n/m})$ with high probability. This provides sharp guarantees for Gaussian discrepancy in a regime that had not been considered before in the existing literature. Our results also apply to a more general family of random matrices with continuous i.i.d. entries, assuming that $m = O(n/\log n)$. The proof is non-constructive and is an application of the second moment method. Our second result is algorithmic and applies to random matrices whose entries are i.i.d. and have a Lipschitz density. We present a randomized polynomial-time algorithm that achieves discrepancy $e^{-\Omega(\log^2(n)/m)}$ with high probability, provided that $m = O(\sqrt{\log n})$. In the one-dimensional case, this matches the best known algorithmic guarantees due to Karmarkar--Karp. For higher dimensions $2 \leq m = O(\sqrt{\log n})$, this establishes the first efficient algorithm achieving discrepancy smaller than $O(\sqrt{m})$.
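For context on the one-dimensional baseline the abstract refers to, here is a minimal sketch of the classical Karmarkar--Karp largest-differencing heuristic for the $m = 1$ case (this illustrates only the well-known one-dimensional heuristic, not the paper's higher-dimensional algorithm; the function name is ours):

```python
import heapq

def karmarkar_karp(values):
    """One-dimensional Karmarkar--Karp (largest differencing) heuristic.

    Repeatedly replaces the two numbers of largest magnitude with their
    difference; the last remaining value is the achieved discrepancy
    |sum_i eps_i * x_i| for some implicit signing eps in {-1, +1}^n.
    """
    if not values:
        return 0.0
    # Max-heap via negated absolute values.
    heap = [-abs(v) for v in values]
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)  # largest remaining magnitude
        b = -heapq.heappop(heap)  # second largest, so a >= b
        heapq.heappush(heap, -(a - b))  # replace the pair by their difference
    return -heap[0]

# Example: on [4, 5, 6, 7, 8] the heuristic differences 8-7, then 6-5,
# then 4-1, then 3-1, leaving discrepancy 2 (the optimum here is 0,
# via the partition {7, 8} vs. {4, 5, 6}).
print(karmarkar_karp([4, 5, 6, 7, 8]))  # -> 2
```

Each differencing step commits the two chosen numbers to opposite sides of the partition, which is why the surviving value equals the discrepancy of a valid signing.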
