arXiv:1902.10409

Representation Learning with Weighted Inner Product for Universal Approximation of General Similarities

27 February 2019
Geewook Kim
Akifumi Okuno
Kazuki Fukui
Hidetoshi Shimodaira
Abstract

We propose weighted inner product similarity (WIPS) for neural-network based graph embedding, where we optimize the weights of the inner product in addition to the parameters of neural networks. Despite its simplicity, WIPS can approximate arbitrary general similarities including positive definite, conditionally positive definite, and indefinite kernels. WIPS is free from similarity model selection, yet it can learn any similarity models such as cosine similarity, negative Poincaré distance and negative Wasserstein distance. Our extensive experiments show that the proposed method can learn high-quality distributed representations of nodes from real datasets, leading to an accurate approximation of similarities as well as high performance in inductive tasks.
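The core idea in the abstract can be sketched in a few lines: instead of a plain inner product between two embeddings, WIPS uses a weighted inner product with learnable (and possibly negative) weights. The sketch below assumes illustrative names and toy vectors not taken from the paper; allowing negative weights is what lets the similarity be indefinite, which a plain inner product (all weights fixed to one) cannot express.

```python
import numpy as np

def wips(x, y, w):
    """Weighted inner product similarity: sum_i w_i * x_i * y_i.

    x, y : embedding vectors (in the paper these are neural-network outputs;
           here they are toy constants for illustration).
    w    : weight vector optimized jointly with the network parameters.
    """
    return float(np.sum(w * x * y))

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.5, -1.0, 2.0])
w = np.array([1.0, 1.0, -1.0])  # a negative weight makes the similarity indefinite

print(wips(x, y, np.ones(3)))  # ordinary inner product: 0.5 - 2.0 - 2.0 = -3.5
print(wips(x, y, w))           # weighted inner product: 0.5 - 2.0 + 2.0 = 0.5
```

In practice the weights and the embedding network would be trained together by gradient descent on a similarity-reconstruction loss; the snippet only illustrates the similarity function itself.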
