
Quantity doesn't buy quality syntax with neural language models

31 August 2019
Marten van Schijndel, Aaron Mueller, Tal Linzen
arXiv:1909.00111

Papers citing "Quantity doesn't buy quality syntax with neural language models"

16 papers shown.

• Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
  Alex Warstadt, Aaron Mueller, Leshem Choshen, E. Wilcox, Chengxu Zhuang, ..., Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell
  10 Apr 2025

• Language Model Behavior: A Comprehensive Survey
  Tyler A. Chang, Benjamin Bergen
  Tags: VLM, LRM, LM&MA
  20 Mar 2023

• Dissociating language and thought in large language models
  Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko
  Tags: ELM, ReLM
  16 Jan 2023

• What Artificial Neural Networks Can Tell Us About Human Language Acquisition
  Alex Warstadt, Samuel R. Bowman
  17 Aug 2022

• Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models
  Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster
  Tags: AIMat
  17 Mar 2022

• The Grammar-Learning Trajectories of Neural Language Models
  Leshem Choshen, Guy Hacohen, D. Weinshall, Omri Abend
  13 Sep 2021

• Pre-Trained Models: Past, Present and Future
  Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
  Tags: AIFin, MQ, AI4MH
  14 Jun 2021

• To Point or Not to Point: Understanding How Abstractive Summarizers Paraphrase Text
  Matthew Wilber, William Timkey, Marten van Schijndel
  03 Jun 2021

• When Do You Need Billions of Words of Pretraining Data?
  Yian Zhang, Alex Warstadt, Haau-Sing Li, Samuel R. Bowman
  10 Nov 2020

• Local and non-local dependency learning and emergence of rule-like representations in speech data by Deep Convolutional Generative Adversarial Networks
  Gašper Beguš
  Tags: GAN
  27 Sep 2020

• Analysis and Evaluation of Language Models for Word Sense Disambiguation
  Daniel Loureiro, Kiamehr Rezaee, Mohammad Taher Pilehvar, Jose Camacho-Collados
  26 Aug 2020

• A Systematic Assessment of Syntactic Generalization in Neural Language Models
  Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Gotlieb Wilcox, R. Levy
  Tags: ELM
  07 May 2020

• Emergence of Syntax Needs Minimal Supervision
  Raphaël Bailly, Kata Gábor
  03 May 2020

• Cross-Linguistic Syntactic Evaluation of Word Prediction Models
  Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, N. Talmina, Tal Linzen
  01 May 2020

• Experience Grounds Language
  Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, ..., Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, Joseph P. Turian
  21 Apr 2020

• Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation
  Alessandro Raganato, Yves Scherrer, Jörg Tiedemann
  24 Feb 2020