arXiv:2104.10809
Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?
William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith
22 April 2021
Papers citing "Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?" (47 / 47 papers shown)
Do Large Language Models know who did what to whom?
  Joseph M. Denning, Xiaohan, Bryor Snefjella, Idan A. Blank · 23 Apr 2025 · 62 / 1 / 0

BAMBI: Developing Baby Language Models for Italian
  Alice Suozzi, Luca Capone, Gianluca E. Lebani, Alessandro Lenci · 12 Mar 2025 · 60 / 0 / 0

Does GPT Really Get It? A Hierarchical Scale to Quantify Human vs AI's Understanding of Algorithms
  Mirabel Reid, Santosh Vempala · ELM · 20 Jun 2024 · 35 / 0 / 0

Natural Language Processing RELIES on Linguistics
  Juri Opitz, Shira Wein, Nathan Schneider · AI4CE · 09 May 2024 · 55 / 7 / 0

Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models
  Wenshan Wu, Shaoguang Mao, Yadong Zhang, Yan Xia, Li Dong, Lei Cui, Furu Wei · LRM · 04 Apr 2024 · 56 / 20 / 0

ChatGPT Rates Natural Language Explanation Quality Like Humans: But on Which Scales?
  Fan Huang, Haewoon Kwak, Kunwoo Park, Jisun An · ALM, ELM, AI4MH · 26 Mar 2024 · 40 / 12 / 0

What Do Language Models Hear? Probing for Auditory Representations in Language Models
  Jerry Ngo, Yoon Kim · AuLLM, MILM · 26 Feb 2024 · 26 / 8 / 0

Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment
  William Merrill, Zhaofeng Wu, Norihito Naka, Yoon Kim, Tal Linzen · 21 Feb 2024 · 41 / 7 / 0

A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step Reasoning Task
  Jannik Brinkmann, Abhay Sheshadri, Victor Levoso, Paul Swoboda, Christian Bartelt · LRM · 19 Feb 2024 · 32 / 21 / 0

WSC+: Enhancing The Winograd Schema Challenge Using Tree-of-Experts
  Pardis Sadat Zahraei, Ali Emami · 31 Jan 2024 · 27 / 6 / 0

More than Correlation: Do Large Language Models Learn Causal Representations of Space?
  Yida Chen, Yixian Gan, Sijia Li, Li Yao, Xiaohan Zhao · LRM · 26 Dec 2023 · 30 / 4 / 0

Grounding for Artificial Intelligence
  Bing Liu · 15 Dec 2023 · 16 / 1 / 0

In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax
  Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen · ReLM, LRM · 13 Nov 2023 · 34 / 13 / 0

Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models
  Raymond Li, Gabriel Murray, Giuseppe Carenini · MoE · 24 Oct 2023 · 41 / 2 / 0

Meaning Representations from Trajectories in Autoregressive Models
  Tian Yu Liu, Matthew Trager, Alessandro Achille, Pramuditha Perera, L. Zancato, Stefano Soatto · 23 Oct 2023 · 29 / 14 / 0

Commonsense Knowledge Transfer for Pre-trained Language Models
  Wangchunshu Zhou, Ronan Le Bras, Yejin Choi · KELM, LRM · 04 Jun 2023 · 13 / 4 / 0

Empirical Sufficiency Lower Bounds for Language Modeling with Locally-Bootstrapped Semantic Structures
  Jakob Prange, Emmanuele Chersoni · 30 May 2023 · 32 / 0 / 0

On Degrees of Freedom in Defining and Testing Natural Language Understanding
  Saku Sugawara, S. Tsugita · ELM · 24 May 2023 · 34 / 1 / 0

Entity Tracking in Language Models
  Najoung Kim, Sebastian Schuster · 03 May 2023 · 52 / 16 / 0

KILM: Knowledge Injection into Encoder-Decoder Language Models
  Yan Xu, Mahdi Namazifar, Devamanyu Hazarika, Aishwarya Padmakumar, Yang Liu, Dilek Z. Hakkani-Tür · KELM · 17 Feb 2023 · 22 / 26 / 0

Parsel: Algorithmic Reasoning with Language Models by Composing Decompositions
  E. Zelikman, Qian Huang, Gabriel Poesia, Noah D. Goodman, Nick Haber · ReLM, LRM · 20 Dec 2022 · 27 / 53 / 0

Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task
  Kenneth Li, Aspen K. Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg · MILM · 24 Oct 2022 · 11 / 263 / 0

Language Models Understand Us, Poorly
  Jared Moore · LRM · 19 Oct 2022 · 17 / 4 / 0

Transparency Helps Reveal When Language Models Learn Meaning
  Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith · 14 Oct 2022 · 19 / 9 / 0

Entailment Semantics Can Be Extracted from an Ideal Language Model
  William Merrill, Alex Warstadt, Tal Linzen · 26 Sep 2022 · 92 / 14 / 0

T5QL: Taming language models for SQL generation
  Samuel Arcadinho, David Oliveira Aparício, Hugo Veiga, António Alegria · 21 Sep 2022 · 26 / 6 / 0

What Do NLP Researchers Believe? Results of the NLP Community Metasurvey
  Julian Michael, Ari Holtzman, Alicia Parrish, Aaron Mueller, Alex Wang, ..., Divyam Madaan, Nikita Nangia, Richard Yuanzhe Pang, Jason Phang, Sam Bowman · 26 Aug 2022 · 27 / 37 / 0

Unit Testing for Concepts in Neural Networks
  Charles Lovering, Ellie Pavlick · 28 Jul 2022 · 25 / 28 / 0

Learning to translate by learning to communicate
  C.M. Downey, Xuhui Zhou, Leo Z. Liu, Shane Steinert-Threlkeld · 14 Jul 2022 · 31 / 5 / 0

Natural Language Specifications in Proof Assistants
  Colin S. Gordon, Sergey Matskevich · 16 May 2022 · 41 / 1 / 0

When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it
  Sebastian Schuster, Tal Linzen · 06 May 2022 · 13 / 25 / 0

On the Limitations of Dataset Balancing: The Lost Battle Against Spurious Correlations
  Roy Schwartz, Gabriel Stanovsky · 27 Apr 2022 · 35 / 25 / 0

A Review on Language Models as Knowledge Bases
  Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona T. Diab, Marjan Ghazvininejad · KELM · 12 Apr 2022 · 41 / 175 / 0

What do Toothbrushes do in the Kitchen? How Transformers Think our World is Structured
  Alexander Henlein, Alexander Mehler · 12 Apr 2022 · 25 / 6 / 0

Contrastive language and vision learning of general fashion concepts
  P. Chia, Giuseppe Attanasio, Federico Bianchi, Silvia Terragni, A. Magalhães, Diogo Gonçalves, C. Greco, Jacopo Tagliabue · CLIP · 08 Apr 2022 · 21 / 42 / 0

Synchromesh: Reliable code generation from pre-trained language models
  Gabriel Poesia, Oleksandr Polozov, Vu Le, A. Tiwari, Gustavo Soares, Christopher Meek, Sumit Gulwani · 26 Jan 2022 · 20 / 156 / 0

This Must Be the Place: Predicting Engagement of Online Communities in a Large-scale Distributed Campaign
  Abraham Israeli, Alexander Kremiansky, Oren Tsur · 14 Jan 2022 · 11 / 3 / 0

Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge
  Ian Porada, Alessandro Sordoni, Jackie C.K. Cheung · 16 Dec 2021 · 29 / 5 / 0

Trees in transformers: a theoretical analysis of the Transformer's ability to represent trees
  Qi He, João Sedoc, J. Rodu · 16 Dec 2021 · 11 / 1 / 0

Linguistic Frameworks Go Toe-to-Toe at Neuro-Symbolic Language Modeling
  Jakob Prange, Nathan Schneider, Lingpeng Kong · 15 Dec 2021 · 22 / 9 / 0

The Dangers of Underclaiming: Reasons for Caution When Reporting How NLP Systems Fail
  Sam Bowman · OffRL · 15 Oct 2021 · 24 / 45 / 0

Symbolic Knowledge Distillation: from General Language Models to Commonsense Models
  Peter West, Chandrasekhar Bhagavatula, Jack Hessel, Jena D. Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, Yejin Choi · SyDa · 14 Oct 2021 · 54 / 320 / 0

Does Vision-and-Language Pretraining Improve Lexical Grounding?
  Tian Yun, Chen Sun, Ellie Pavlick · VLM, CoGe · 21 Sep 2021 · 40 / 30 / 0

Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color
  Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, Anders Søgaard · 13 Sep 2021 · 19 / 114 / 0

LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond
  Daniel Loureiro, A. Jorge, Jose Camacho-Collados · 26 May 2021 · 33 / 26 / 0

What you can cram into a single vector: Probing sentence embeddings for linguistic properties
  Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni · 03 May 2018 · 201 / 882 / 0

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
  Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · ELM · 20 Apr 2018 · 297 / 6,959 / 0