Do Neural Language Models Show Preferences for Syntactic Formalisms?
arXiv:2004.14096, 29 April 2020
Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou, Joakim Nivre
Papers citing "Do Neural Language Models Show Preferences for Syntactic Formalisms?" (16 papers)
Language Models at the Syntax-Semantics Interface: A Case Study of the Long-Distance Binding of Chinese Reflexive ziji
Xiulin Yang (02 Apr 2025)
A Language Model's Guide Through Latent Space
Dimitri von Rütte, Sotiris Anagnostidis, Gregor Bachmann, Thomas Hofmann (22 Feb 2024)
Syntactic Substitutability as Unsupervised Dependency Syntax
Jasper Jian, Siva Reddy (29 Nov 2022)
Towards Unsupervised Content Disentanglement in Sentence Representations via Syntactic Roles
G. Felhi, Joseph Le Roux, Djamé Seddah (22 Jun 2022)
Sort by Structure: Language Model Ranking as Dependency Probing
Max Müller-Eberstein, Rob van der Goot, Barbara Plank (10 Jun 2022)
Exploiting Inductive Bias in Transformers for Unsupervised Disentanglement of Syntax and Semantics with VAEs
G. Felhi, Joseph Le Roux, Djamé Seddah (12 May 2022)
Probing for Labeled Dependency Trees
Max Müller-Eberstein, Rob van der Goot, Barbara Plank (24 Mar 2022)
Linguistic Frameworks Go Toe-to-Toe at Neuro-Symbolic Language Modeling
Jakob Prange, Nathan Schneider, Lingpeng Kong (15 Dec 2021)
Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes
Tomasz Limisiewicz, David Mareček (10 Sep 2021)
A Closer Look at How Fine-tuning Changes BERT
Yichu Zhou, Vivek Srikumar (27 Jun 2021)
How is BERT surprised? Layerwise detection of linguistic anomalies
Bai Li, Zining Zhu, Guillaume Thomas, Yang Xu, Frank Rudzicz (16 May 2021)
Probing artificial neural networks: insights from neuroscience
Anna A. Ivanova, John Hewitt, Noga Zaslavsky (16 Apr 2021)
DirectProbe: Studying Representations without Classifiers
Yichu Zhou, Vivek Srikumar (13 Apr 2021)
The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models
Go Inoue, Bashar Alhafni, Nurpeiis Baimukan, Houda Bouamor, Nizar Habash (11 Mar 2021)
Probing Pretrained Language Models for Lexical Semantics
Ivan Vulić, E. Ponti, Robert Litschko, Goran Glavaš, Anna Korhonen (12 Oct 2020)
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni (03 May 2018)