ResearchTrend.AI
Adapting Neural Link Predictors for Data-Efficient Complex Query Answering

29 January 2023
Erik Arakelyan
Pasquale Minervini
Daniel Daza
Michael Cochez
Isabelle Augenstein
Abstract

Answering complex queries on incomplete knowledge graphs is a challenging task where a model needs to answer complex logical queries in the presence of missing knowledge. Prior work has proposed to address this problem with architectures trained end-to-end for the complex query answering task; these require data- and resource-intensive training, and their reasoning process is hard to interpret. Other lines of research have proposed re-using simple neural link predictors to answer complex queries, reducing the amount of training data needed by orders of magnitude while providing interpretable answers. However, the neural link predictor used in such approaches is not explicitly optimised for the complex query answering task, which implies that its scores are not calibrated to interact together. We propose to address these problems via CQD^A, a parameter-efficient score \emph{adaptation} model optimised to re-calibrate neural link prediction scores for the complex query answering task. While the neural link predictor is frozen, the adaptation component -- which increases the number of model parameters by only 0.03% -- is trained on the downstream complex query answering task. Furthermore, the calibration component enables us to support reasoning over queries that include atomic negations, which was previously impossible with link predictors. In our experiments, CQD^A produces significantly more accurate results than current state-of-the-art methods, improving the Mean Reciprocal Rank from 34.4 to 35.1, averaged across all datasets and query types, while using ≤30% of the available training query types. We further show that CQD^A is data-efficient, achieving competitive results with only 1% of the training complex queries, and robust in out-of-domain evaluations.
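To make the idea concrete, the following is a minimal sketch, not the paper's implementation: a frozen link predictor produces raw scores, a tiny affine-plus-sigmoid adaptation (standing in for the paper's calibration component; `alpha` and `beta` are hypothetical parameters that would be trained on complex queries) maps them into [0, 1], after which negation and conjunction can be composed with fuzzy-logic operators. The toy triples and scores are invented for illustration.

```python
import math

# Frozen link predictor scores for a toy knowledge graph, standing in
# for a pre-trained neural link predictor (illustrative values only).
FROZEN_SCORES = {
    ("alice", "livesIn", "paris"): 2.1,
    ("paris", "locatedIn", "france"): 3.4,
}

def link_score(h, r, t):
    """Frozen link predictor: returns a raw, uncalibrated score."""
    return FROZEN_SCORES.get((h, r, t), -5.0)

def adapt(score, alpha=1.0, beta=0.0):
    """Hypothetical adaptation layer: an affine map followed by a sigmoid,
    re-calibrating the raw score into [0, 1]. In the paper's setting,
    parameters like alpha and beta would be the only ones trained on the
    downstream complex query answering task."""
    return 1.0 / (1.0 + math.exp(-(alpha * score + beta)))

def negate(p):
    """Atomic negation, possible once scores behave like probabilities."""
    return 1.0 - p

def conjoin(p, q):
    """Conjunction of two atoms via the product t-norm."""
    return p * q

# Score the conjunctive 2-hop query
#   ?v. livesIn(alice, v) AND locatedIn(v, france), with candidate v = paris
p1 = adapt(link_score("alice", "livesIn", "paris"))
p2 = adapt(link_score("paris", "locatedIn", "france"))
print(round(conjoin(p1, p2), 3))
```

Because the calibrated scores live in [0, 1], `negate` is well defined, which is the property that lets this family of methods handle queries with atomic negations.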
