Subspace Embedding and Linear Regression with Orlicz Norm

17 June 2018
Alexandr Andoni
Chengyu Lin
Ying Sheng
Peilin Zhong
Ruiqi Zhong
arXiv:1806.06430
Abstract

We consider a generalization of the classic linear regression problem to the case when the loss is an Orlicz norm. An Orlicz norm is parameterized by a non-negative convex function $G:\mathbb{R}_+\rightarrow\mathbb{R}_+$ with $G(0)=0$: the Orlicz norm of a vector $x\in\mathbb{R}^n$ is defined as
$$\|x\|_G=\inf\left\{\alpha>0 \,\middle|\, \sum_{i=1}^n G(|x_i|/\alpha)\leq 1\right\}.$$
We consider the cases where the function $G(\cdot)$ grows subquadratically. Our main result is based on a new oblivious embedding which embeds the column space of a given matrix $A\in\mathbb{R}^{n\times d}$ with Orlicz norm into a lower-dimensional space with $\ell_2$ norm. Specifically, we show how to efficiently find an embedding matrix $S\in\mathbb{R}^{m\times n}$, $m<n$, such that
$$\forall x\in\mathbb{R}^{d},\quad \Omega(1/(d\log n))\cdot\|Ax\|_G \leq \|SAx\|_2 \leq O(d^2\log n)\cdot\|Ax\|_G.$$
By applying this subspace embedding technique, we obtain an approximation algorithm for the regression problem $\min_{x\in\mathbb{R}^d}\|Ax-b\|_G$, up to an $O(d\log^2 n)$ approximation factor. As a further application of our techniques, we show how to use them to improve on the algorithm for the $\ell_p$ low-rank matrix approximation problem for $1\leq p<2$.
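The norm defined above has no closed form, but because $G$ is convex, non-negative, and vanishes at zero, it is non-decreasing on $\mathbb{R}_+$, so $\alpha \mapsto \sum_i G(|x_i|/\alpha)$ is non-increasing in $\alpha$ and the infimum can be located by bisection. The sketch below (not from the paper; it assumes NumPy and a user-supplied, vectorized $G$) simply illustrates the definition:

import numpy as np

def orlicz_norm(x, G, iters=100):
    """||x||_G = inf{ alpha > 0 : sum_i G(|x_i| / alpha) <= 1 }.

    Assumes G is convex and non-negative with G(0) = 0 (and unbounded,
    so a valid alpha always exists); under these assumptions the map
    alpha -> sum_i G(|x_i| / alpha) is non-increasing, so the infimum
    can be found by bisection.
    """
    x = np.abs(np.asarray(x, dtype=float))
    if not x.any():
        return 0.0
    budget = lambda alpha: np.sum(G(x / alpha))
    lo = hi = x.max()
    while budget(hi) > 1.0:   # grow hi until the constraint is satisfied
        hi *= 2.0
    while budget(lo) <= 1.0:  # shrink lo until the constraint is violated
        lo /= 2.0
    for _ in range(iters):    # bisect down to the threshold alpha
        mid = 0.5 * (lo + hi)
        if budget(mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

# Sanity check: G(t) = t**p recovers the l_p norm.
x = [3.0, -4.0, 1.0]
print(orlicz_norm(x, lambda t: t * t))  # 5.0990... = sqrt(26), the l_2 norm
print(orlicz_norm(x, lambda t: t))      # 8.0, the l_1 norm

Taking $G(t)=t^p$ recovers the $\ell_p$ norm exactly, which gives a quick sanity check; the paper's interest is in more general subquadratically growing $G$.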
