Don't throw the baby out with the bathwater: How and why deep learning for ARC

Jack Cole, Mohamed Osman
17 June 2025 · arXiv:2506.14276 · LRM

Papers citing "Don't throw the baby out with the bathwater: How and why deep learning for ARC"

13 / 13 papers shown

1. How Does Code Pretraining Affect Language Model Task Performance?
   Jackson Petty, Sjoerd van Steenkiste, Tal Linzen · 06 Sep 2024
2. Neural networks for abstraction and reasoning: Towards broad generalization in machines
   Mikel Bober-Irizar, Soumya Banerjee · AI4CE, LRM, NAI · 05 Feb 2024
3. General-Purpose In-Context Learning by Meta-Learning Transformers
   Louis Kirsch, James Harrison, Jascha Narain Sohl-Dickstein, Luke Metz · 08 Dec 2022
4. Language Models of Code are Few-Shot Commonsense Learners
   Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig · ReLM, LRM · 13 Oct 2022
5. Competition-Level Code Generation with AlphaCode
   Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, ..., Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, Oriol Vinyals · 08 Feb 2022
6. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
   Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou · LM&Ro, LRM, AI4CE, ReLM · 28 Jan 2022
7. LongT5: Efficient Text-To-Text Transformer for Long Sequences
   Mandy Guo, Joshua Ainslie, David C. Uthus, Santiago Ontanon, Jianmo Ni, Yun-hsuan Sung, Yinfei Yang · VLM · 15 Dec 2021
8. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
   Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby · ViT · 22 Oct 2020
9. The Pitfalls of Simplicity Bias in Neural Networks
   Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli · AAML · 13 Jun 2020
10. Scaling Laws for Neural Language Models
    Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei · 23 Jan 2020
11. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
    Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu · AIMat · 23 Oct 2019
12. Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
    Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals · 19 Sep 2019
13. Prototypical Networks for Few-shot Learning
    Jake C. Snell, Kevin Swersky, R. Zemel · 15 Mar 2017