ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Dissociating language and thought in large language models
arXiv:2301.06627

16 January 2023
Kyle Mahowald
Anna A. Ivanova
I. Blank
Nancy Kanwisher
J. Tenenbaum
Evelina Fedorenko
Communities: ELM, ReLM

Papers citing "Dissociating language and thought in large language models"

50 / 128 papers shown
Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
Matthew Finlayson
Aaron Mueller
Sebastian Gehrmann
Stuart M. Shieber
Tal Linzen
Yonatan Belinkov
100
110
0
10 Jun 2021
A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
Ethan Gotlieb Wilcox
P. Vani
R. Levy
62
36
0
06 Jun 2021
Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing
Rowan Hall Maudslay
Ryan Cotterell
42
33
0
04 Jun 2021
Visual analogy: Deep learning versus compositional models
Nicholas Ichien
Qing Liu
Shuhao Fu
K. Holyoak
Alan Yuille
Hongjing Lu
CoGe
42
21
0
14 May 2021
Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction
Shauli Ravfogel
Grusha Prasad
Tal Linzen
Yoav Goldberg
50
59
0
14 May 2021
Probing artificial neural networks: insights from neuroscience
Anna A. Ivanova
John Hewitt
Noga Zaslavsky
38
16
0
16 Apr 2021
Disentangling Semantics and Syntax in Sentence Embeddings with Pre-trained Language Models
James Y. Huang
Kuan-Hao Huang
Kai-Wei Chang
65
21
0
11 Apr 2021
Are NLP Models really able to Solve Simple Math Word Problems?
Arkil Patel
S. Bhattamishra
Navin Goyal
ReLM
LRM
78
825
0
12 Mar 2021
Coordination Among Neural Modules Through a Shared Global Workspace
Anirudh Goyal
Aniket Didolkar
Alex Lamb
Kartikeya Badola
Nan Rosemary Ke
Nasim Rahaman
Jonathan Binas
Charles Blundell
Michael C. Mozer
Yoshua Bengio
192
98
0
01 Mar 2021
Probing Classifiers: Promises, Shortcomings, and Advances
Yonatan Belinkov
256
440
0
24 Feb 2021
Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar
Nora Kassner
Shauli Ravfogel
Abhilasha Ravichander
Eduard H. Hovy
Hinrich Schütze
Yoav Goldberg
HILM
314
366
0
01 Feb 2021
When Do You Need Billions of Words of Pretraining Data?
Yian Zhang
Alex Warstadt
Haau-Sing Li
Samuel R. Bowman
51
141
0
10 Nov 2020
Analyzing Individual Neurons in Pre-trained Language Models
Nadir Durrani
Hassan Sajjad
Fahim Dalvi
Yonatan Belinkov
MILM
55
104
0
06 Oct 2020
Do Transformers Need Deep Long-Range Memory?
Jack W. Rae
Ali Razavi
RALM
48
40
0
07 Jul 2020
Language Models are Few-Shot Learners
Tom B. Brown
Benjamin Mann
Nick Ryder
Melanie Subbiah
Jared Kaplan
...
Christopher Berner
Sam McCandlish
Alec Radford
Ilya Sutskever
Dario Amodei
BDL
596
41,736
0
28 May 2020
A Systematic Assessment of Syntactic Generalization in Neural Language Models
Jennifer Hu
Jon Gauthier
Peng Qian
Ethan Gotlieb Wilcox
R. Levy
ELM
69
220
0
07 May 2020
Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models
Kaiji Lu
Piotr (Peter) Mardziel
Klas Leino
Matt Fredrikson
Anupam Datta
52
10
0
03 May 2020
Extending Multilingual BERT to Low-Resource Languages
Zihan Wang
Karthikeyan K
Stephen D. Mayhew
Dan Roth
VLM
56
129
0
28 Apr 2020
Syntactic Structure from Deep Learning
Tal Linzen
Marco Baroni
NAI
52
185
0
22 Apr 2020
Longformer: The Long-Document Transformer
Iz Beltagy
Matthew E. Peters
Arman Cohan
RALM
VLM
128
4,048
0
10 Apr 2020
Information-Theoretic Probing with Minimum Description Length
Elena Voita
Ivan Titov
65
274
0
27 Mar 2020
The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
G. Marcus
VLM
63
363
0
14 Feb 2020
Scaling Laws for Neural Language Models
Jared Kaplan
Sam McCandlish
T. Henighan
Tom B. Brown
B. Chess
R. Child
Scott Gray
Alec Radford
Jeff Wu
Dario Amodei
522
4,773
0
23 Jan 2020
Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks
R. Thomas McCoy
Robert Frank
Tal Linzen
71
108
0
10 Jan 2020
oLMpics -- On what Language Model Pre-training Captures
Alon Talmor
Yanai Elazar
Yoav Goldberg
Jonathan Berant
LRM
96
303
0
31 Dec 2019
BLiMP: The Benchmark of Linguistic Minimal Pairs for English
Alex Warstadt
Alicia Parrish
Haokun Liu
Anhad Mohananey
Wei Peng
Sheng-Fu Wang
Samuel R. Bowman
72
491
0
02 Dec 2019
CamemBERT: a Tasty French Language Model
Louis Martin
Benjamin Muller
Pedro Ortiz Suarez
Yoann Dupont
Laurent Romary
Eric Villemonte de la Clergerie
Djamé Seddah
Benoît Sagot
96
970
0
10 Nov 2019
Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
Nora Kassner
Hinrich Schütze
68
321
0
08 Nov 2019
Designing and Interpreting Probes with Control Tasks
John Hewitt
Percy Liang
58
536
0
08 Sep 2019
Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs
Alex Warstadt
Yuning Cao
Ioana Grosu
Wei Peng
Hagen Blix
...
Jason Phang
Anhad Mohananey
Phu Mon Htut
Paloma Jeretic
Samuel R. Bowman
46
123
0
05 Sep 2019
Language Models as Knowledge Bases?
Fabio Petroni
Tim Rocktaschel
Patrick Lewis
A. Bakhtin
Yuxiang Wu
Alexander H. Miller
Sebastian Riedel
KELM
AI4MH
558
2,660
0
03 Sep 2019
Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning
Lifu Huang
Ronan Le Bras
Chandra Bhagavatula
Yejin Choi
AIMat
RALM
LRM
99
453
0
31 Aug 2019
Quantity doesn't buy quality syntax with neural language models
Marten van Schijndel
Aaron Mueller
Tal Linzen
59
68
0
31 Aug 2019
How Relevant is the Turing Test in the Age of Sophisbots?
Dan Boneh
Andrew J. Grotto
Patrick McDaniel
Nicolas Papernot
26
32
0
30 Aug 2019
Compositionality decomposed: how do neural networks generalise?
Dieuwke Hupkes
Verna Dankers
Mathijs Mul
Elia Bruni
CoGe
118
332
0
22 Aug 2019
Universal Adversarial Triggers for Attacking and Analyzing NLP
Eric Wallace
Shi Feng
Nikhil Kandpal
Matt Gardner
Sameer Singh
AAML
SILM
109
865
0
20 Aug 2019
What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models
Allyson Ettinger
79
603
0
31 Jul 2019
Learning by Abstraction: The Neural State Machine
Drew A. Hudson
Christopher D. Manning
NAI
OCL
61
260
0
09 Jul 2019
Learning as the Unsupervised Alignment of Conceptual Systems
Brett D. Roads
Bradley C. Love
OCL
45
46
0
21 Jun 2019
Measuring Bias in Contextualized Word Representations
Keita Kurita
Nidhi Vyas
Ayush Pareek
A. Black
Yulia Tsvetkov
98
449
0
18 Jun 2019
What Kind of Language Is Hard to Language-Model?
Sabrina J. Mielke
Ryan Cotterell
Kyle Gorman
Brian Roark
Jason Eisner
66
78
0
11 Jun 2019
Analyzing the Structure of Attention in a Transformer Language Model
Jesse Vig
Yonatan Belinkov
64
365
0
07 Jun 2019
How multilingual is Multilingual BERT?
Telmo Pires
Eva Schlinger
Dan Garrette
LRM
VLM
143
1,401
0
04 Jun 2019
Episodic Memory in Lifelong Language Learning
Cyprien de Masson d'Autume
Sebastian Ruder
Lingpeng Kong
Dani Yogatama
CLL
KELM
126
285
0
03 Jun 2019
HellaSwag: Can a Machine Really Finish Your Sentence?
Rowan Zellers
Ari Holtzman
Yonatan Bisk
Ali Farhadi
Yejin Choi
145
2,446
0
19 May 2019
BERT Rediscovers the Classical NLP Pipeline
Ian Tenney
Dipanjan Das
Ellie Pavlick
MILM
SSeg
126
1,469
0
15 May 2019
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
Alex Jinpeng Wang
Yada Pruksachatkun
Nikita Nangia
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
ELM
228
2,305
0
02 May 2019
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
Jiayuan Mao
Chuang Gan
Pushmeet Kohli
J. Tenenbaum
Jiajun Wu
NAI
115
696
0
26 Apr 2019
The emergence of number and syntax units in LSTM language models
Yair Lakretz
Germán Kruszewski
T. Desbordes
Dieuwke Hupkes
S. Dehaene
Marco Baroni
44
170
0
18 Mar 2019
Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages
Shauli Ravfogel
Yoav Goldberg
Tal Linzen
63
70
0
15 Mar 2019