ResearchTrend.AI

What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models
Allyson Ettinger
arXiv:1907.13528, 31 July 2019

Papers citing "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models"

50 / 119 papers shown

Representing Affect Information in Word Embeddings
Yuhan Zhang, Wenqi Chen, Ruihan Zhang, Xiajie Zhang (21 Sep 2022)

Subject Verb Agreement Error Patterns in Meaningless Sentences: Humans vs. BERT
Karim Lasri, Olga Seminck, Alessandro Lenci, Thierry Poibeau (21 Sep 2022)

Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis
Pangbo Ban, Yifan Jiang, Tianran Liu, Shane Steinert-Threlkeld (11 Sep 2022)

Cognitive Modeling of Semantic Fluency Using Transformers
Animesh Nighojkar, Anna Khlyzova, John Licato (20 Aug 2022)

Using cognitive psychology to understand GPT-3
Marcel Binz, Eric Schulz (21 Jun 2022)

The Fallacy of AI Functionality
Inioluwa Deborah Raji, Indra Elizabeth Kumar, Aaron Horowitz, Andrew D. Selbst (20 Jun 2022)

The Curious Case of Control
Elias Stengel-Eskin, Benjamin Van Durme (24 May 2022)

Life after BERT: What do Other Muppets Understand about Language?
Vladislav Lialin, Kevin Zhao, Namrata Shivagunde, Anna Rumshisky (21 May 2022)

Natural Language Specifications in Proof Assistants
Colin S. Gordon, Sergey Matskevich (16 May 2022)

Improving Contextual Representation with Gloss Regularized Pre-training
Yu Lin, Zhecheng An, Peihao Wu, Zejun Ma (13 May 2022)

Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence
Myeongjun Jang, Frank Mtumbuka, Thomas Lukasiewicz (08 May 2022)

When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it
Sebastian Schuster, Tal Linzen (06 May 2022)

Developmental Negation Processing in Transformer Language Models
Antonio Laverghetta, John Licato (29 Apr 2022)

Probing Simile Knowledge from Pre-trained Language Models
Weijie Chen, Yongzhu Chang, Rongsheng Zhang, Jiashu Pu, Guandan Chen, Le Zhang, Yadong Xi, Yijiang Chen, Chang Su (27 Apr 2022)

Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks
Ruixiang Cui, Daniel Hershcovich, Anders Søgaard (22 Apr 2022)

minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models
Kanishka Misra (24 Mar 2022)

How does the pre-training objective affect what large language models learn about linguistic properties?
Ahmed Alajrami, Nikolaos Aletras (20 Mar 2022)

Geographic Adaptation of Pretrained Language Models
Valentin Hofmann, Goran Glavaš, Nikola Ljubešić, J. Pierrehumbert, Hinrich Schütze (16 Mar 2022)

Neural reality of argument structure constructions
Bai Li, Zining Zhu, Guillaume Thomas, Frank Rudzicz, Yang Xu (24 Feb 2022)

Probing BERT's priors with serial reproduction chains
Takateru Yamakoshi, Thomas L. Griffiths, Robert D. Hawkins (24 Feb 2022)

Towards Property-Based Tests in Natural Language
Colin S. Gordon (08 Feb 2022)

Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey
Prajjwal Bhargava, Vincent Ng (28 Jan 2022)

AI and the Everything in the Whole Wide World Benchmark
Inioluwa Deborah Raji, Emily M. Bender, Amandalynne Paullada, Emily L. Denton, A. Hanna (26 Nov 2021)

Few-shot Named Entity Recognition with Cloze Questions
V. Gatta, V. Moscato, Marco Postiglione, Giancarlo Sperlí (24 Nov 2021)

Using Distributional Principles for the Semantic Study of Contextual Language Models
Olivier Ferret (23 Nov 2021)

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth (01 Nov 2021)

Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen (22 Oct 2021)

ALL Dolphins Are Intelligent and SOME Are Friendly: Probing BERT for Nouns' Semantic Properties and their Prototypicality
Marianna Apidianaki, Aina Garí Soler (12 Oct 2021)

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
Nora Kassner, Oyvind Tafjord, Hinrich Schütze, Peter Clark (29 Sep 2021)

Analysing the Effect of Masking Length Distribution of MLM: An Evaluation Framework and Case Study on Chinese MRC Datasets
Changchang Zeng, Shaobo Li (29 Sep 2021)

Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations
Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova (28 Sep 2021)

AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses
Yaman Kumar Singla, Swapnil Parekh, Somesh Singh, J. Li, R. Shah, Changyou Chen (24 Sep 2021)

Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing
Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, Jian-Guang Lou (22 Sep 2021)

Transformers in the loop: Polarity in neural models of language
Lisa Bylinina, Alexey Tikhonov (08 Sep 2021)

Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Albert Webson, Ellie Pavlick (02 Sep 2021)

Differentiable Subset Pruning of Transformer Heads
Jiaoda Li, Ryan Cotterell, Mrinmaya Sachan (10 Aug 2021)

Local Structure Matters Most: Perturbation Study in NLU
Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar (29 Jul 2021)

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig (28 Jul 2021)

Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?
J. Michaelov, Megan D. Bardolph, S. Coulson, Benjamin Bergen (20 Jul 2021)

Automatic Construction of Evaluation Suites for Natural Language Generation Datasets
Simon Mille, Kaustubh D. Dhole, Saad Mahamood, Laura Perez-Beltrachini, Varun Gangal, Mihir Kale, Emiel van Miltenburg, Sebastian Gehrmann (16 Jun 2021)

BERT Embeddings for Automatic Readability Assessment
Joseph Marvin Imperial (15 Jun 2021)

Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu (14 Jun 2021)

How is BERT surprised? Layerwise detection of linguistic anomalies
Bai Li, Zining Zhu, Guillaume Thomas, Yang Xu, Frank Rudzicz (16 May 2021)

Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema
Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth (16 Apr 2021)

Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models
Matteo Alleman, J. Mamou, Miguel Rio, Hanlin Tang, Yoon Kim, SueYeon Chung (15 Apr 2021)

Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, J. Pineau, Adina Williams, Douwe Kiela (14 Apr 2021)

Semantic maps and metrics for science using deep transformer encoders
Brendan Chambers, James A. Evans (13 Apr 2021)

Explaining the Road Not Taken
Hua Shen, Ting-Hao 'Kenneth' Huang (27 Mar 2021)

Bertinho: Galician BERT Representations
David Vilares, Marcos Garcia, Carlos Gómez-Rodríguez (25 Mar 2021)

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg (01 Feb 2021)