Enhancing Transformers for Generalizable First-Order Logical Entailment

1 January 2025
Tianshi Zheng
Jiazheng Wang
Zihao Wang
Jiaxin Bai
Hang Yin
Zheye Deng
Yangqiu Song
Jianxin Li
Abstract

Transformers, as a fundamental deep learning architecture, have demonstrated great capability in reasoning. This paper studies the generalizable first-order logical reasoning ability of transformers with their parameterized knowledge and how to improve it. Their capability for first-order reasoning is captured by whether they can perform first-order logical entailment, which we quantitatively measure through their performance on answering knowledge graph queries. We establish connections between (1) two types of distribution shifts studied in out-of-distribution generalization and (2) the unseen knowledge and query settings discussed in knowledge graph query answering, which makes it possible to characterize fine-grained generalizability. Results on our comprehensive dataset show that transformers outperform previous methods designed specifically for this task and provide detailed empirical evidence about the impact of input query syntax, token embedding, and transformer architecture on their reasoning capability. Interestingly, our results reveal a mismatch between positional encoding and other architectural design choices in previous practice. Motivated by this, we propose TEGA, a logic-aware architecture that significantly improves performance on generalizable first-order logical entailment.
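
As an illustration only, the sketch below shows one way a first-order knowledge graph query might be tokenized and scored against candidate answer entities by a small transformer encoder. It is not the paper's TEGA architecture or its actual query syntax; the vocabulary, prefix notation, mean pooling, and learned absolute positional embeddings are all assumptions made for this example.

```python
# Illustrative sketch (not TEGA): tokenize a first-order knowledge-graph query
# in prefix notation and score every entity as a candidate answer with a small
# transformer encoder. All names and the query syntax here are assumptions.
import torch
import torch.nn as nn

# Toy vocabulary: logical operators, relation tokens, and anchor-entity tokens.
VOCAB = ["<pad>", "AND", "OR", "NOT", "EXISTS",
         "r:friend", "r:works_at", "e:alice", "e:acme"]
TOK = {t: i for i, t in enumerate(VOCAB)}

def encode_query(tokens, max_len=16):
    """Map a prefix-notation query, e.g. ['EXISTS', 'AND', ...], to padded token ids."""
    ids = [TOK[t] for t in tokens][:max_len]
    ids += [TOK["<pad>"]] * (max_len - len(ids))
    return torch.tensor(ids).unsqueeze(0)  # shape (1, max_len)

class QueryScorer(nn.Module):
    """Transformer encoder that pools the query and scores each entity as an answer."""
    def __init__(self, vocab_size, num_entities, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(64, d_model)  # learned absolute positions (an assumption)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.entity_emb = nn.Embedding(num_entities, d_model)

    def forward(self, token_ids):
        pos = torch.arange(token_ids.size(1), device=token_ids.device)
        h = self.tok_emb(token_ids) + self.pos_emb(pos)
        h = self.encoder(h).mean(dim=1)          # (batch, d_model) pooled query embedding
        return h @ self.entity_emb.weight.T      # (batch, num_entities) answer scores

# "Does there exist y such that friend(alice, y) AND works_at(y, acme)?"
query = encode_query(["EXISTS", "AND", "r:friend", "e:alice", "r:works_at", "e:acme"])
model = QueryScorer(len(VOCAB), num_entities=100)
print(model(query).shape)  # torch.Size([1, 100])
```

Under this framing, changing the query syntax (e.g. prefix vs. infix tokenization) or the positional encoding changes only the input pipeline and the embedding layers, which is the kind of design choice whose impact the paper evaluates empirically.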

@article{zheng2025_2501.00759,
  title={Enhancing Transformers for Generalizable First-Order Logical Entailment},
  author={Tianshi Zheng and Jiazheng Wang and Zihao Wang and Jiaxin Bai and Hang Yin and Zheye Deng and Yangqiu Song and Jianxin Li},
  journal={arXiv preprint arXiv:2501.00759},
  year={2025}
}