Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages

15 March 2019
Shauli Ravfogel, Yoav Goldberg, Tal Linzen
arXiv:1903.06400

Papers citing "Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages"

12 / 12 papers shown

Towards Bridging the Digital Language Divide
Gábor Bella, Paula Helm, Gertraud Koch, Fausto Giunchiglia
25 Jul 2023

Does Character-level Information Always Improve DRS-based Semantic Parsing?
Tomoya Kurosawa, Hitomi Yanaka
04 Jun 2023

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko
16 Jan 2023

Linear Connectivity Reveals Generalization Strategies
Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra
24 May 2022

Pretraining with Artificial Language: Studying Transferable Knowledge in Language Models
Ryokan Ri, Yoshimasa Tsuruoka
19 Mar 2022

Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models
Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster
17 Mar 2022

How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN
R. Thomas McCoy, P. Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz
18 Nov 2021

CoBERL: Contrastive BERT for Reinforcement Learning
Andrea Banino, Adria Puidomenech Badia, Jacob Walker, Tim Scholtes, Jovana Mitrović, Charles Blundell
12 Jul 2021

Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT
Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald
26 Jan 2021

Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model
Tsung-Yuan Hsu, Chi-Liang Liu, Hung-yi Lee
15 Sep 2019

What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
Miryam de Lhoneux, Sara Stymne, Joakim Nivre
18 Jul 2019

What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni
03 May 2018