Combining Induction and Transduction for Abstract Reasoning

4 November 2024
Wen-Ding Li
Keya Hu
Carter Larsen
Yuqing Wu
Simon Alford
Caleb Woo
Spencer M. Dunn
Hao Tang
Michelangelo Naim
Dat Nguyen
Wei-Long Zheng
Zenna Tavares
Yewen Pu
Kevin Ellis
Abstract

When learning an input-output mapping from very few examples, is it better to first infer a latent function that explains the examples, or to directly predict new test outputs, e.g., using a neural network? We study this question on the Abstraction and Reasoning Corpus (ARC) by training neural models for induction (inferring latent functions) and for transduction (directly predicting the test output for a given test input). We train on synthetically generated variations of Python programs that solve ARC training tasks. We find that inductive and transductive models solve different kinds of test problems, despite being trained on the same problems and sharing the same neural architecture: inductive program synthesis excels at precise computations and at composing multiple concepts, while transduction succeeds on fuzzier perceptual concepts. Ensembling the two models approaches human-level performance on ARC.
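The contrast between the two regimes, and the ensembling recipe the abstract describes, can be illustrated with a toy sketch. The code below is a hypothetical illustration, not the paper's implementation: the grid type, the candidate program list, and the stub transducer are placeholder assumptions. It follows the abstract's logic of trusting an induced program when one verifies against all training pairs (precise computation) and falling back to direct prediction otherwise (fuzzier perception).

```python
# Minimal sketch of an induction/transduction ensemble for ARC-style tasks.
# Everything here (Grid type, candidate programs, stub transducer) is a
# hypothetical placeholder, not the paper's actual models or search.

from typing import Callable, List, Optional, Tuple

Grid = List[List[int]]
Program = Callable[[Grid], Grid]


def induce(train_pairs: List[Tuple[Grid, Grid]],
           candidates: List[Program]) -> Optional[Program]:
    """Induction: search for a latent program that explains every
    training input-output pair; return it if one verifies."""
    for program in candidates:
        if all(program(x) == y for x, y in train_pairs):
            return program
    return None


def transduce(test_input: Grid) -> Grid:
    """Transduction: directly predict the test output from the test
    input. A trained neural model would go here; this stub echoes
    the input as a placeholder prediction."""
    return test_input


def ensemble_predict(train_pairs: List[Tuple[Grid, Grid]],
                     test_input: Grid,
                     candidates: List[Program]) -> Grid:
    """Ensemble: prefer induction when a program verifies on the
    training pairs; otherwise fall back to transduction."""
    program = induce(train_pairs, candidates)
    if program is not None:
        return program(test_input)
    return transduce(test_input)


if __name__ == "__main__":
    # Toy task: the latent rule doubles every cell value.
    train = [([[1, 2]], [[2, 4]]), ([[3]], [[6]])]
    candidates = [
        lambda g: [[c + 1 for c in row] for row in g],  # wrong rule
        lambda g: [[c * 2 for c in row] for row in g],  # correct rule
    ]
    print(ensemble_predict(train, [[4, 5]], candidates))  # -> [[8, 10]]
```

In this sketch the induced program is exactly verifiable against the training pairs, which is why induction wins on precise, compositional rules; when no candidate verifies, the transductive fallback still produces a guess, mirroring the division of labor the abstract reports.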
