Sparse random tensors: concentration, regularization and applications

20 November 2019
Zhixin Zhou
Yizhe Zhu
arXiv:1911.09063
Abstract

We prove a non-asymptotic concentration inequality for sparse inhomogeneous random tensors under the spectral norm. For an order-$k$ inhomogeneous random tensor $T$ with sparsity $p_{\max}\geq \frac{c\log n}{n}$, we show that $\|T-\mathbb{E}T\|=O(\sqrt{np_{\max}}\,\log^{k-2}(n))$ with high probability. The optimality of this bound is provided by an information-theoretic lower bound. By tensor matricization, we extend the range of sparsity to $p_{\max}\geq \frac{c\log n}{n^{k-1}}$ and obtain $\|T-\mathbb{E}T\|=O(\sqrt{n^{k-1}p_{\max}})$ with high probability. We also provide a simple way to regularize $T$ such that the $O(\sqrt{n^{k-1}p_{\max}})$ concentration still holds down to sparsity $p_{\max}\geq \frac{c}{n^{k-1}}$. We present our concentration and regularization results with two applications: (i) a randomized construction of hypergraphs of bounded degrees with good expander mixing properties, and (ii) concentration of sparsified tensors under uniform sampling.
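The matricization-based bound above is easy to probe numerically: flattening an order-$k$ tensor into an $n \times n^{k-1}$ matrix upper-bounds the tensor spectral norm, and the operator norm of that matrix can be computed directly. The sketch below (not the authors' code) generates an order-3 inhomogeneous Bernoulli tensor and compares $\|T-\mathbb{E}T\|$ of the matricization against $\sqrt{n^{k-1}p_{\max}}$; the constants, the choice of $n$, and the probability profile `P` are illustrative assumptions only.

```python
# Minimal numerical sketch of the matricization bound
# ||T - E[T]|| = O(sqrt(n^{k-1} p_max)) for an order-3
# inhomogeneous Bernoulli tensor. The n x n^{k-1} flattening
# upper-bounds the tensor spectral norm, so it serves as a
# cheap proxy here. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, k = 60, 3
p_max = 5 * np.log(n) / n**(k - 1)   # sparsity regime p_max >= c log(n) / n^{k-1}

# Inhomogeneous entrywise probabilities with maximum entry p_max.
P = p_max * rng.uniform(0.2, 1.0, size=(n,) * k)
T = (rng.uniform(size=P.shape) < P).astype(float)   # T_ijk ~ Bernoulli(P_ijk)

# Matricize T - E[T] into an n x n^{k-1} matrix; ord=2 gives the operator norm.
M = (T - P).reshape(n, -1)
deviation = np.linalg.norm(M, ord=2)

print(f"||T - E[T]|| (matricized): {deviation:.3f}")
print(f"sqrt(n^(k-1) * p_max):     {np.sqrt(n**(k - 1) * p_max):.3f}")
```

Up to the constant hidden in the $O(\cdot)$, the printed deviation should track the second quantity as $n$ grows, consistent with the stated concentration result.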
