Sketch-based Randomized Algorithms for Dynamic Graph Regression

Applied Mathematics and Computation (Appl. Math. Comput.), 2019
28 May 2019
M. H. Chehreghani
Abstract

A well-known problem in data science and machine learning is linear regression, which has recently been extended to dynamic graphs. Existing exact algorithms for updating the solution of the dynamic graph regression problem require at least linear time in $n$, the size of the graph, which can be intractable in practice. In this paper, we use the subsampled randomized Hadamard transform and CountSketch to propose the first randomized algorithms for this problem. Suppose we are given an $n \times m$ matrix embedding $M$ of the graph, where $m \ll n$, and let $r$ be the number of samples required for a guaranteed approximation error; $r$ is a sublinear function of $n$. Our first algorithm reduces the pre-processing time to $O(n(m+1) + 2n(m+1)\log_2(r+1) + rm^2)$; then, after an edge insertion or deletion, it updates the approximate solution in $O(rm)$ time. Our second algorithm reduces the pre-processing time to $O(\mathrm{nnz}(M) + m^3 \epsilon^{-2} \log^7(m/\epsilon))$, where $\mathrm{nnz}(M)$ is the number of nonzero elements of $M$; then, after an edge insertion, edge deletion, node insertion, or node deletion, it updates the approximate solution in $O(qm)$ time, with $q = O\left(\frac{m^2}{\epsilon^2} \log^6(m/\epsilon)\right)$. Finally, we show that under some assumptions, our first algorithm outperforms our second algorithm when $\ln n < \epsilon^{-1}$, and our second algorithm outperforms our first when $\ln n \geq \epsilon^{-1}$.
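To make the sketching idea concrete, here is a minimal Python/NumPy sketch of CountSketch applied to a static least-squares problem: the $n \times m$ design matrix is compressed to $r \ll n$ rows in $O(\mathrm{nnz}(M))$ time, and the small sketched system is solved in place of the original. The function name `countsketch_solve` and all parameter choices are illustrative assumptions; this shows only the underlying static primitive, not the paper's dynamic update algorithms.

```python
import numpy as np

def countsketch_solve(M, y, r, seed=None):
    """Approximate argmin_x ||Mx - y||_2 by solving the sketched problem
    min_x ||S M x - S y||_2, where S is an r x n CountSketch matrix.
    Applying S costs O(nnz(M)): each row of M is added, with a random
    sign, to exactly one of r buckets."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    buckets = rng.integers(0, r, size=n)        # hash h: [n] -> [r]
    signs = rng.choice([-1.0, 1.0], size=n)     # signs s: [n] -> {-1, +1}
    SM = np.zeros((r, m))
    Sy = np.zeros(r)
    np.add.at(SM, buckets, signs[:, None] * M)  # SM[h(i)] += s(i) * M[i]
    np.add.at(Sy, buckets, signs * y)           # Sy[h(i)] += s(i) * y[i]
    # The sketched system is r x m with r << n, so this solve is cheap.
    x_hat, *_ = np.linalg.lstsq(SM, Sy, rcond=None)
    return x_hat

# Toy usage: approximately recover a planted coefficient vector.
rng = np.random.default_rng(0)
M = rng.standard_normal((10_000, 20))
x_true = rng.standard_normal(20)
y = M @ x_true + 0.01 * rng.standard_normal(10_000)
x_hat = countsketch_solve(M, y, r=2_000, seed=1)
print(np.linalg.norm(x_hat - x_true))  # small approximation error
```

Because applying $S$ touches each nonzero of $M$ exactly once, this is the same $O(\mathrm{nnz}(M))$ pre-processing cost that appears in the second algorithm's bound above; the paper's contribution lies in how the sketched solution is then maintained under edge and node updates.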
