On the proper role of linguistically-oriented deep net analysis in linguistic theorizing

16 June 2021
Marco Baroni

Papers citing "On the proper role of linguistically-oriented deep net analysis in linguistic theorizing"

16 citing papers

Towards Developmentally Plausible Rewards: Communicative Success as a Learning Signal for Interactive Language Models
Lennart Stöpler, Rufat Asadli, Mitja Nikolaus, Ryan Cotterell, Alex Warstadt
Communities: LRM
09 May 2025
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt, Aaron Mueller, Leshem Choshen, E. Wilcox, Chengxu Zhuang, ..., Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell
10 Apr 2025
Language Models Largely Exhibit Human-like Constituent Ordering Preferences
Ada Defne Tur, Gaurav Kamath, Siva Reddy
08 Feb 2025
From Frege to chatGPT: Compositionality in language, cognition, and deep neural networks
Jacob Russin, Sam Whitman McGrath, Danielle J. Williams, Lotem Elber-Dorozko
Communities: AI4CE
24 May 2024
Babysit A Language Model From Scratch: Interactive Language Learning by Trials and Demonstrations
Ziqiao Ma, Zekun Wang, Joyce Chai
22 May 2024
Language in Vivo vs. in Silico: Size Matters but Larger Language Models Still Do Not Comprehend Language on a Par with Humans
Vittoria Dentella, Fritz Guenther, Evelina Leivada
Communities: ELM
23 Apr 2024
A Method for Studying Semantic Construal in Grammatical Constructions with Interpretable Contextual Embedding Spaces
Gabriella Chronis, Kyle Mahowald, K. Erk
29 May 2023
A Discerning Several Thousand Judgments: GPT-3 Rates the Article + Adjective + Numeral + Noun Construction
Kyle Mahowald
29 Jan 2023
Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko
Communities: ELM, ReLM
16 Jan 2023
Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good is It and How Does It Affect Transfer?
Ningyu Xu, Tao Gui, Ruotian Ma, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang
21 Dec 2022
State-of-the-art generalisation research in NLP: A taxonomy and review
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, ..., Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, Zhijing Jin
06 Oct 2022
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández
30 Sep 2021
Transformers in the loop: Polarity in neural models of language
Lisa Bylinina, Alexey Tikhonov
08 Sep 2021
How Does Adversarial Fine-Tuning Benefit BERT?
J. Ebrahimi, Hao Yang, Wei Zhang
Communities: AAML
31 Aug 2021
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni
03 May 2018
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Communities: ELM
20 Apr 2018