  3. 2112.03753
Tell me why! -- Explanations support learning of relational and causal structure

7 December 2021
Andrew Kyle Lampinen
Nicholas A. Roy
Ishita Dasgupta
Stephanie C. Y. Chan
Allison C. Tam
James L. McClelland
Chen Yan
Adam Santoro
Neil C. Rabinowitz
Jane X. Wang
Felix Hill
Abstract

Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI -- forming abstractions, and learning about the relational and causal structure of the world. Here, we explore whether reinforcement learning agents might likewise benefit from explanations. We outline a family of relational tasks that involve selecting an object that is the odd one out in a set (i.e., unique along one of many possible feature dimensions). Odd-one-out tasks require agents to reason over multi-dimensional relationships among a set of objects. We show that agents do not learn these tasks well from reward alone, but achieve >90% performance when they are also trained to generate language explaining object properties or why a choice is correct or incorrect. In further experiments, we show how predicting explanations enables agents to generalize appropriately from ambiguous, causally-confounded training, and even to meta-learn to perform experimental interventions to identify causal structure. We show that explanations help overcome the tendency of agents to fixate on simple features, and explore which aspects of explanations make them most beneficial. Our results suggest that learning from explanations is a powerful principle that could offer a promising path towards training more robust and general machine learning systems.
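To make the task family concrete, the following is a minimal, hypothetical sketch of an odd-one-out episode generator with a ground-truth language explanation. The feature dimensions, values, and explanation template are illustrative assumptions for exposition; they are not the paper's actual environment or training setup.

```python
import random

# Illustrative feature dimensions (assumed, not from the paper).
COLORS = ["red", "blue", "green"]
SHAPES = ["circle", "square", "triangle"]

def make_episode(n_objects=4, rng=random):
    """Build one simplified odd-one-out episode.

    All objects share the same value on every dimension except that one
    object (the correct choice) is unique along a single randomly chosen
    dimension. Returns (objects, odd_index, explanation), where the
    explanation is the kind of language target an agent could be trained
    to generate alongside its choice.
    """
    odd_dim = rng.choice(["color", "shape"])
    common_color = rng.choice(COLORS)
    common_shape = rng.choice(SHAPES)
    objects = [{"color": common_color, "shape": common_shape}
               for _ in range(n_objects)]

    # Make one object unique along the chosen dimension.
    odd_index = rng.randrange(n_objects)
    if odd_dim == "color":
        objects[odd_index]["color"] = rng.choice(
            [c for c in COLORS if c != common_color])
    else:
        objects[odd_index]["shape"] = rng.choice(
            [s for s in SHAPES if s != common_shape])

    explanation = (f"object {odd_index} is the odd one out "
                   f"because its {odd_dim} is unique")
    return objects, odd_index, explanation
```

In the paper's actual tasks the non-target dimensions also vary across objects, so the agent must compare objects along several dimensions at once rather than spot a single differing attribute; this sketch only conveys the basic episode structure and the paired explanation target.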
