ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Finding Syntax in Human Encephalography with Beam Search
arXiv:1806.04127 · 11 June 2018
John T. Hale, Chris Dyer, A. Kuncoro, Jonathan Brennan
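The headline paper applies beam search to incremental parsing of naturalistic text. The listing itself gives no algorithmic detail, so as a generic, minimal sketch only (not the authors' parser; the `expand` and `score` callables and the toy usage below are illustrative assumptions), beam search keeps the k highest-scoring partial hypotheses at each step:

```python
import heapq

def beam_search(initial, expand, score, beam_width=5, steps=10):
    """Generic beam search: at each step, expand every hypothesis in the
    beam and keep only the `beam_width` highest-scoring successors.

    expand(state) -> iterable of successor states
    score(state)  -> float, higher is better
    """
    beam = [initial]
    for _ in range(steps):
        # Expand every hypothesis currently on the beam.
        candidates = [s for state in beam for s in expand(state)]
        if not candidates:
            break
        # Prune: keep only the top `beam_width` candidates.
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)

# Toy usage: grow digit strings, scoring by numeric value.
best = beam_search(
    "",
    expand=lambda s: [s + c for c in "012"] if len(s) < 3 else [],
    score=lambda s: int(s) if s else 0,
    beam_width=2,
    steps=3,
)
```

The pruning step is what distinguishes beam search from exhaustive search: hypotheses outside the top k are discarded permanently, trading optimality guarantees for tractable cost.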

Papers citing "Finding Syntax in Human Encephalography with Beam Search"

50 of 53 citing papers are shown below.

  • Large Language Models Are Human-Like Internally. Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin. 03 Feb 2025.
  • Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey). S. Oota, Zijiao Chen, Manish Gupta, R. Bapi, G. Jobard, F. Alexandre, X. Hinaut. 31 Dec 2024.
  • Sneaking Syntax into Transformer Language Models with Tree Regularization. Ananjan Nandi, Christopher D. Manning, Shikhar Murty. 28 Nov 2024.
  • Dependency Transformer Grammars: Integrating Dependency Structures into Transformer Language Models. Yida Zhao, Chao Lou, Kewei Tu. 24 Jul 2024.
  • Investigating the Timescales of Language Processing with EEG and Language Models. Davide Turco, Conor J. Houghton. 28 Jun 2024.
  • Decoding Probing: Revealing Internal Linguistic Structures in Neural Language Models using Minimal Pairs. Linyang He, Peili Chen, Ercong Nie, Yuanning Li, Jonathan R. Brennan. 26 Mar 2024.
  • Computational Models to Study Language Processing in the Human Brain: A Survey. Shaonan Wang, Jingyuan Sun, Yunhao Zhang, Nan Lin, Marie-Francine Moens, Chengqing Zong. 20 Mar 2024.
  • Emergent Word Order Universals from Cognitively-Motivated Language Models. Tatsuki Kuribayashi, Ryo Ueda, Ryosuke Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin. 19 Feb 2024.
  • Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times. Byung-Doh Oh, Shisen Yue, William Schuler. 03 Feb 2024.
  • Multipath parsing in the brain. Berta Franzluebbers, Donald Dunagan, Miloš Stanojević, Jan Buys, John T. Hale. 31 Jan 2024.
  • Divergences between Language Models and Human Brains. Yuchen Zhou, Emmy Liu, Graham Neubig, Michael J. Tarr, Leila Wehbe. 15 Nov 2023.
  • Psychometric Predictive Power of Large Language Models. Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin. 13 Nov 2023.
  • Empirical Sufficiency Lower Bounds for Language Modeling with Locally-Bootstrapped Semantic Structures. Jakob Prange, Emmanuele Chersoni. 30 May 2023.
  • Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality. Harman Singh, Pengchuan Zhang, Qifan Wang, Mengjiao MJ Wang, Wenhan Xiong, Jingfei Du, Yu Chen. 23 May 2023.
  • Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens. Byung-Doh Oh, William Schuler. 22 Apr 2023.
  • Coupling Artificial Neurons in BERT and Biological Neurons in the Human Brain. Xu Liu, Mengyue Zhou, Gaosheng Shi, Yu Du, Lin Zhao, Zihao Wu, David Liu, Tianming Liu, Xintao Hu. 27 Mar 2023.
  • Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times? Byung-Doh Oh, William Schuler. 23 Dec 2022.
  • Probing for Incremental Parse States in Autoregressive Language Models. Tiwalayo Eisape, Vineet Gangireddy, R. Levy, Yoon Kim. 17 Nov 2022.
  • Characterizing Intrinsic Compositionality in Transformers with Tree Projections. Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning. 02 Nov 2022.
  • Modeling structure-building in the brain with CCG parsing and large language models. Miloš Stanojević, Jonathan Brennan, Donald Dunagan, Mark Steedman, John T. Hale. 28 Oct 2022.
  • Composition, Attention, or Both? Ryosuke Yoshida, Yohei Oseki. 24 Oct 2022.
  • Context Limitations Make Neural Language Models More Human-Like. Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui. 23 May 2022.
  • Multilingual Syntax-aware Language Modeling through Dependency Tree Conversion. Shun Kando, Hiroshi Noji, Yusuke Miyao. 19 Apr 2022.
  • Connecting Neural Response Measurements & Computational Models of Language: A Non-comprehensive Guide. Mostafa Abdou. 10 Mar 2022.
  • Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale. Laurent Sartran, Samuel Barrett, A. Kuncoro, Miloš Stanojević, Phil Blunsom, Chris Dyer. 01 Mar 2022.
  • Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification. Zhenhailong Wang, Heng Ji. 05 Dec 2021.
  • Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models. Yiwen Wang, Jennifer Hu, R. Levy, Peng Qian. 22 Sep 2021.
  • Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars. Ryo Yoshida, Hiroshi Noji, Yohei Oseki. 10 Sep 2021.
  • A Biologically Plausible Parser. Daniel Mitropolsky, Mike Collins, Christos H. Papadimitriou. 04 Aug 2021.
  • Lower Perplexity is Not Always Human-Like. Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui. 02 Jun 2021.
  • Effective Batching for Recurrent Neural Network Grammars. Hiroshi Noji, Yohei Oseki. 31 May 2021.
  • Retrieving Event-related Human Brain Dynamics from Natural Sentence Reading. Xinping Liu, Zehong Cao. 29 Mar 2021.
  • Disentangling Syntax and Semantics in the Brain with Deep Networks. Charlotte Caucheteux, Alexandre Gramfort, J. King. 02 Mar 2021.
  • Decoding EEG Brain Activity for Multi-Modal Natural Language Processing. Nora Hollenstein, Cédric Renggli, B. Glaus, Maria Barrett, M. Troendle, N. Langer, Ce Zhang. 17 Feb 2021.
  • From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli? Maryam Hashemzadeh, Greta Kaufeld, Martha White, Andrea E. Martin, Alona Fyshe. 14 Oct 2020.
  • Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach. Wenyu Du, Zhouhan Lin, Yikang Shen, Timothy J. O'Donnell, Yoshua Bengio, Yue Zhang. 12 May 2020.
  • Syntactic Structure from Deep Learning. Tal Linzen, Marco Baroni. 22 Apr 2020.
  • The Fluidity of Concept Representations in Human Brain Signals. E. Hendrikx, Lisa Beinborn. 20 Feb 2020.
  • ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation. Nora Hollenstein, M. Troendle, Ce Zhang, N. Langer. 02 Dec 2019.
  • Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study. A. An, Peng Qian, Ethan Gotlieb Wilcox, R. Levy. 10 Sep 2019.
  • Cross-Domain Generalization of Neural Constituency Parsers. Daniel Fried, Nikita Kitaev, Dan Klein. 09 Jul 2019.
  • Relating Simple Sentence Representations in Deep Neural Networks and the Brain. Sharmistha Jat, Hao Tang, Partha P. Talukdar, Tom Michael Mitchell. 27 Jun 2019.
  • Scalable Syntax-Aware Language Models Using Knowledge Distillation. A. Kuncoro, Chris Dyer, Laura Rimell, S. Clark, Phil Blunsom. 14 Jun 2019.
  • Interpreting and Improving Natural-Language Processing (in Machines) with Natural Language-Processing (in the Brain). Mariya Toneva, Leila Wehbe. 28 May 2019.
  • Learning to Collocate Neural Modules for Image Captioning. Xu Yang, Hanwang Zhang, Jianfei Cai. 18 Apr 2019.
  • Unsupervised Recurrent Neural Network Grammars. Yoon Kim, Alexander M. Rush, Lei Yu, A. Kuncoro, Chris Dyer, Gábor Melis. 07 Apr 2019.
  • Advancing NLP with Cognitive Language Processing Signals. Nora Hollenstein, Maria Barrett, M. Troendle, Francesco Bigiolli, N. Langer, Ce Zhang. 04 Apr 2019.
  • Robust Evaluation of Language-Brain Encoding Experiments. Lisa Beinborn, Samira Abnar, Rochelle Choenni. 04 Apr 2019.
  • Understanding Language-elicited EEG Data by Predicting It from a Fine-tuned Language Model. Dan Schwartz, Tom Michael Mitchell. 02 Apr 2019.
  • Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State. Richard Futrell, Ethan Gotlieb Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, R. Levy. 08 Mar 2019.