ResearchTrend.AI
A MIND for Reasoning: Meta-learning for In-context Deduction

20 May 2025
Leonardo Bertolazzi
Manuel Vargas Guzmán
Raffaella Bernardi
Maciej Malicki
Jakub Szymanik
Main: 7 pages · Appendix: 18 pages · Bibliography: 3 pages · 25 figures · 8 tables
Abstract

Large language models (LLMs) are increasingly evaluated on formal tasks, where strong reasoning abilities define the state of the art. However, their ability to generalize to out-of-distribution problems remains limited. In this paper, we investigate how LLMs can achieve a systematic understanding of deductive rules. Our focus is on the task of identifying the appropriate subset of premises within a knowledge base needed to derive a given hypothesis. To tackle this challenge, we propose Meta-learning for In-context Deduction (MIND), a novel few-shot meta-learning fine-tuning approach. The goal of MIND is to enable models to generalize more effectively to unseen knowledge bases and to systematically apply inference rules. Our results show that MIND significantly improves generalization in small LMs ranging from 1.5B to 7B parameters. The benefits are especially pronounced in smaller models and low-data settings. Remarkably, small models fine-tuned with MIND outperform state-of-the-art LLMs, such as GPT-4o and o3-mini, on this task.
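The core task described above, selecting the subset of premises in a knowledge base that derives a given hypothesis, can be made concrete with a small sketch. The toy inference rule (transitivity over "All X are Y" statements), the knowledge-base encoding, and the function names below are illustrative assumptions, not the paper's actual data format or method:

```python
# Toy sketch of premise selection: given a knowledge base of "All X are Y"
# premises encoded as (X, Y) pairs, find the smallest subset that derives a
# hypothesis via transitivity. Purely illustrative of the task setup.
from itertools import combinations

def derives(premises, hypothesis):
    """Forward-chain transitivity: (a, b) and (b, c) yield (a, c)."""
    facts = set(premises)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (c, d) in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))
                    changed = True
    return hypothesis in facts

def minimal_support(kb, hypothesis):
    """Smallest subset of the knowledge base deriving the hypothesis."""
    for k in range(1, len(kb) + 1):
        for subset in combinations(kb, k):
            if derives(subset, hypothesis):
                return set(subset)
    return None

kb = [("dog", "mammal"), ("mammal", "animal"), ("cat", "mammal")]
print(minimal_support(kb, ("dog", "animal")))
# selects the two premises forming the transitive chain
```

In a meta-learning setup like the one the abstract outlines, each knowledge base would serve as one episode, with a few solved (hypothesis, premise-subset) pairs given in context and the model asked to solve a new hypothesis over the same base.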

@article{bertolazzi2025_2505.14313,
  title={A MIND for Reasoning: Meta-learning for In-context Deduction},
  author={Leonardo Bertolazzi and Manuel Vargas Guzmán and Raffaella Bernardi and Maciej Malicki and Jakub Szymanik},
  journal={arXiv preprint arXiv:2505.14313},
  year={2025}
}