ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2301.12313

Adapting Neural Link Predictors for Complex Query Answering

29 January 2023
Erik Arakelyan
Pasquale Minervini
Daniel Daza
arXiv (abs) | PDF | HTML
Abstract

Answering complex queries on incomplete knowledge graphs is a challenging task in which a model must answer complex logical queries in the presence of missing knowledge. Recently, Arakelyan et al. (2021) and Minervini et al. (2022) showed that neural link predictors can also be used for answering complex queries: their Continuous Query Decomposition (CQD) method works by decomposing complex queries into atomic sub-queries, answering them with neural link predictors, and aggregating their scores via t-norms to rank the answers to each complex query. However, CQD does not handle negations and only uses the training signal from atomic training queries: neural link prediction scores are not calibrated to interact together via fuzzy-logic t-norms during complex query answering. In this work, we propose to address this problem by training a parameter-efficient score adaptation model to re-calibrate neural link prediction scores: this new component is trained on complex queries by back-propagating through the complex query-answering process. Our method, CQD^A, produces significantly more accurate results than current state-of-the-art methods, improving from 34.4 to 35.1 Mean Reciprocal Rank averaged across all datasets and query types while using ≤ 35% of the available training query types. We further show that CQD^A is data-efficient, achieving competitive results with only 1% of the training data, and robust in out-of-domain evaluations.
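The pipeline the abstract describes can be illustrated with a small sketch. This is not the authors' implementation: the scores, the affine `adapt` function (standing in for the learned parameter-efficient adaptation model), and the candidate answers are all illustrative assumptions; only the t-norm aggregation scheme follows the description above.

```python
# Sketch of CQD-style complex query answering: atomic sub-query scores are
# re-calibrated by a small adaptation function and combined with fuzzy-logic
# t-norms to rank candidate answers to a conjunctive query.

def product_tnorm(a: float, b: float) -> float:
    """Product t-norm: T(a, b) = a * b."""
    return a * b

def godel_tnorm(a: float, b: float) -> float:
    """Goedel (minimum) t-norm: T(a, b) = min(a, b)."""
    return min(a, b)

def adapt(score: float, alpha: float = 1.0, beta: float = 0.0) -> float:
    """Hypothetical affine re-calibration of a link-prediction score,
    clamped to [0, 1]; stands in for the learned adaptation component."""
    return min(1.0, max(0.0, alpha * score + beta))

def score_conjunction(atom_scores, tnorm=product_tnorm):
    """Aggregate the re-calibrated scores of a query's atomic sub-queries."""
    total = adapt(atom_scores[0])
    for s in atom_scores[1:]:
        total = tnorm(total, adapt(s))
    return total

# Hypothetical scores for the two atoms of a 2-hop conjunctive query,
# for three candidate answers; a higher aggregate score ranks higher.
candidates = {"a": [0.9, 0.8], "b": [0.95, 0.2], "c": [0.6, 0.7]}
ranking = sorted(candidates, key=lambda c: score_conjunction(candidates[c]),
                 reverse=True)
```

In the paper's setting, the adaptation parameters are trained end-to-end by back-propagating through this aggregation, so that atomic scores interact well under the chosen t-norm; here `adapt` defaults to the identity.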
