Linguistic Knowledge and Transferability of Contextual Representations
arXiv:1903.08855 (21 March 2019)
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith

Papers citing "Linguistic Knowledge and Transferability of Contextual Representations" (50 of 454 shown; topic tags in brackets):
- NeuroX Library for Neuron Analysis of Deep NLP Models. Fahim Dalvi, Hassan Sajjad, Nadir Durrani. 26 May 2023.
- Backpack Language Models. John Hewitt, John Thickstun, Christopher D. Manning, Percy Liang. 26 May 2023. [KELM]
- Out-of-Distribution Generalization in Text Classification: Past, Present, and Future. Linyi Yang, Y. Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang. 23 May 2023. [OOD]
- Can Language Models Understand Physical Concepts? Lei Li, Jingjing Xu, Qingxiu Dong, Ce Zheng, Qi Liu, Lingpeng Kong, Xu Sun. 23 May 2023. [ALM]
- Can LLMs facilitate interpretation of pre-trained language models? Basel Mousi, Nadir Durrani, Fahim Dalvi. 22 May 2023.
- Can NLP Models Correctly Reason Over Contexts that Break the Common Assumptions? Neeraj Varshney, Mihir Parmar, Nisarg Patel, Divij Handa, Sayantan Sarkar, Man Luo, Chitta Baral. 20 May 2023. [LRM]
- Learning to Generalize for Cross-domain QA. Yingjie Niu, Linyi Yang, Ruihai Dong, Yue Zhang. 14 May 2023.
- Harvesting Event Schemas from Large Language Models. Jialong Tang, Hongyu Lin, Zhuoqun Li, Yaojie Lu, Xianpei Han, Le Sun. 12 May 2023.
- The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification. Anastasiia Grishina, Max Hort, Leon Moonen. 08 May 2023.
- Analyzing the Generalizability of Deep Contextualized Language Representations For Text Classification. Berfu Buyukoz. 22 Mar 2023.
- Jump to Conclusions: Short-Cutting Transformers With Linear Transformations. Alexander Yom Din, Taelin Karidi, Leshem Choshen, Mor Geva. 16 Mar 2023.
- Attention-likelihood relationship in transformers. Valeria Ruscio, Valentino Maiorca, Fabrizio Silvestri. 15 Mar 2023.
- The Life Cycle of Knowledge in Big Language Models: A Survey. Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun. 14 Mar 2023. [KELM]
- Probing Graph Representations. Mohammad Sadegh Akhondzadeh, Vijay Lingam, Aleksandar Bojchevski. 07 Mar 2023.
- Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study. Mingxu Tao, Yansong Feng, Dongyan Zhao. 02 Mar 2023. [CLL, KELM]
- Reanalyzing L2 Preposition Learning with Bayesian Mixed Effects and a Pretrained Language Model. Jakob Prange, Man Ho Ivy Wong. 16 Feb 2023.
- COMBO: A Complete Benchmark for Open KG Canonicalization. Chengyue Jiang, Yong-jia Jiang, Weiqi Wu, Yuting Zheng, Pengjun Xie, Kewei Tu. 08 Feb 2023.
- Cluster-Level Contrastive Learning for Emotion Recognition in Conversations. Kailai Yang, Tianlin Zhang, Hassan Alhuzali, Sophia Ananiadou. 07 Feb 2023.
- An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning. Mohammad AkbarTajari, S. Rajaee, Mohammad Taher Pilehvar. 01 Feb 2023.
- The geometry of hidden representations of large transformer models. L. Valeriani, Diego Doimo, F. Cuturello, A. Laio, A. Ansuini, Alberto Cazzaniga. 01 Feb 2023. [MILM]
- Protein Representation Learning via Knowledge Enhanced Primary Structure Modeling. Hong-Yu Zhou, Yunxiang Fu, Zhicheng Zhang, Cheng Bian, Yizhou Yu. 30 Jan 2023.
- Evaluating Neuron Interpretation Methods of NLP Models. Yimin Fan, Fahim Dalvi, Nadir Durrani, Hassan Sajjad. 30 Jan 2023.
- Can We Use Probing to Better Understand Fine-tuning and Knowledge Distillation of the BERT NLU? Jakub Hościłowicz, Marcin Sowanski, Piotr Czubowski, Artur Janicki. 27 Jan 2023.
- Interpretability in Activation Space Analysis of Transformers: A Focused Survey. Soniya Vijayakumar. 22 Jan 2023. [AI4CE]
- Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good is It and How Does It Affect Transfer? Ningyu Xu, Tao Gui, Ruotian Ma, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang. 21 Dec 2022.
- Analyzing Semantic Faithfulness of Language Models via Input Intervention on Question Answering. Akshay Chaturvedi, Swarnadeep Bhar, Soumadeep Saha, Utpal Garain, Nicholas Asher. 21 Dec 2022.
- G-MAP: General Memory-Augmented Pre-trained Language Model for Domain Tasks. Zhongwei Wan, Yichun Yin, Wei Zhang, Jiaxin Shi, Lifeng Shang, Guangyong Chen, Xin Jiang, Qun Liu. 07 Dec 2022. [VLM, CLL]
- On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning. S. Takagi. 17 Nov 2022. [OffRL]
- Prompting Language Models for Linguistic Structure. Terra Blevins, Hila Gonen, Luke Zettlemoyer. 15 Nov 2022. [LRM]
- Finding Skill Neurons in Pre-trained Transformer-based Language Models. Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, Juanzi Li. 14 Nov 2022. [MILM, MoE]
- ConceptX: A Framework for Latent Concept Analysis. Firoj Alam, Fahim Dalvi, Nadir Durrani, Hassan Sajjad, A. Khan, Jia Xu. 12 Nov 2022.
- The Architectural Bottleneck Principle. Tiago Pimentel, Josef Valvoda, Niklas Stoehr, Ryan Cotterell. 11 Nov 2022.
- A Survey of Knowledge Enhanced Pre-trained Language Models. Linmei Hu, Zeyi Liu, Ziwang Zhao, Lei Hou, Liqiang Nie, Juanzi Li. 11 Nov 2022. [KELM, VLM]
- Impact of Adversarial Training on Robustness and Generalizability of Language Models. Enes Altinisik, Hassan Sajjad, H. Sencar, Safa Messaoud, Sanjay Chawla. 10 Nov 2022. [AAML]
- LERT: A Linguistically-motivated Pre-trained Language Model. Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu. 10 Nov 2022.
- COPEN: Probing Conceptual Knowledge in Pre-trained Language Models. Hao Peng, Xiaozhi Wang, Shengding Hu, Hailong Jin, Lei Hou, Juanzi Li, Zhiyuan Liu, Qun Liu. 08 Nov 2022.
- How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers. Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz. 07 Nov 2022.
- Parameter-Efficient Tuning Makes a Good Classification Head. Zhuoyi Yang, Ming Ding, Yanhui Guo, Qingsong Lv, Jie Tang. 30 Oct 2022. [VLM]
- Probing for targeted syntactic knowledge through grammatical error detection. Christopher Davis, Christopher Bryant, Andrew Caines, Marek Rei, P. Buttery. 28 Oct 2022.
- Leveraging Open Data and Task Augmentation to Automated Behavioral Coding of Psychotherapy Conversations in Low-Resource Scenarios. Zhuohao Chen, Nikolaos Flemotomos, Zac E. Imel, David C. Atkins, Shrikanth Narayanan. 25 Oct 2022.
- Exploring Mode Connectivity for Pre-trained Language Models. Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou. 25 Oct 2022.
- On the Transformation of Latent Space in Fine-Tuned NLP Models. Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Firoj Alam. 23 Oct 2022.
- Leveraging Large Language Models for Multiple Choice Question Answering. Joshua Robinson, Christopher Rytting, David Wingate. 22 Oct 2022. [ELM]
- Probing with Noise: Unpicking the Warp and Weft of Embeddings. Filip Klubicka, John D. Kelleher. 21 Oct 2022.
- Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning. Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, Hongyuan Mei. 18 Oct 2022.
- Post-hoc analysis of Arabic transformer models. Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad. 18 Oct 2022.
- Can Language Representation Models Think in Bets? Zhi-Bin Tang, M. Kejriwal. 14 Oct 2022.
- Transparency Helps Reveal When Language Models Learn Meaning. Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith. 14 Oct 2022.
- Predicting Fine-Tuning Performance with Probing. Zining Zhu, Soroosh Shahtalebi, Frank Rudzicz. 13 Oct 2022.
- Understanding Prior Bias and Choice Paralysis in Transformer-based Language Representation Models through Four Experimental Probes. Ke Shen, M. Kejriwal. 03 Oct 2022.