ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
arXiv:1608.04207 · 15 August 2016
Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg

Papers citing "Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks"

50 / 122 papers shown
Debiasing Methods in Natural Language Understanding Make Bias More Accessible
Michael J. Mendelson
Yonatan Belinkov
42
23
0
09 Sep 2021
Neuron-level Interpretation of Deep NLP Models: A Survey
Hassan Sajjad
Nadir Durrani
Fahim Dalvi
MILM
AI4CE
35
80
0
30 Aug 2021
What do pre-trained code models know about code?
Anjan Karmakar
Romain Robbes
ELM
32
87
0
25 Aug 2021
Pre-Trained Models: Past, Present and Future
Xu Han
Zhengyan Zhang
Ning Ding
Yuxian Gu
Xiao Liu
...
Jie Tang
Ji-Rong Wen
Jinhui Yuan
Wayne Xin Zhao
Jun Zhu
AIFin
MQ
AI4MH
40
815
0
14 Jun 2021
How transfer learning impacts linguistic knowledge in deep NLP models?
Nadir Durrani
Hassan Sajjad
Fahim Dalvi
13
48
0
31 May 2021
How Reliable are Model Diagnostics?
V. Aribandi
Yi Tay
Donald Metzler
19
19
0
12 May 2021
Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses
Aina Garí Soler
Marianna Apidianaki
MILM
209
68
0
29 Apr 2021
Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?
William Merrill
Yoav Goldberg
Roy Schwartz
Noah A. Smith
17
67
0
22 Apr 2021
Enhancing Cognitive Models of Emotions with Representation Learning
Yuting Guo
Jinho D. Choi
28
5
0
20 Apr 2021
Probing artificial neural networks: insights from neuroscience
Anna A. Ivanova
John Hewitt
Noga Zaslavsky
6
16
0
16 Apr 2021
Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo
Hamish Ivison
S. Han
Josiah Poon
MILM
33
48
0
20 Mar 2021
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina
Maxat Tezekbayev
Nuradil Kozhakhmet
Madina Babazhanova
Matthias Gallé
Z. Assylbekov
34
8
0
02 Mar 2021
Probing Classifiers: Promises, Shortcomings, and Advances
Yonatan Belinkov
226
405
0
24 Feb 2021
Probing Multimodal Embeddings for Linguistic Properties: the Visual-Semantic Case
Adam Dahlgren Lindström
Suna Bensch
Johanna Björklund
F. Drewes
24
20
0
22 Feb 2021
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
Benjamin Muller
Yanai Elazar
Benoît Sagot
Djamé Seddah
LRM
29
71
0
26 Jan 2021
Infusing Finetuning with Semantic Dependencies
Zhaofeng Wu
Hao Peng
Noah A. Smith
22
36
0
10 Dec 2020
When Do You Need Billions of Words of Pretraining Data?
Yian Zhang
Alex Warstadt
Haau-Sing Li
Samuel R. Bowman
23
136
0
10 Nov 2020
Probing Task-Oriented Dialogue Representation from Language Models
Chien-Sheng Wu
Caiming Xiong
13
20
0
26 Oct 2020
Language Models are Open Knowledge Graphs
Chenguang Wang
Xiao Liu
D. Song
SSL
KELM
26
135
0
22 Oct 2020
Linguistic Profiling of a Neural Language Model
Alessio Miaschi
D. Brunato
F. Dell’Orletta
Giulia Venturi
36
46
0
05 Oct 2020
Examining the rhetorical capacities of neural language models
Zining Zhu
Chuer Pan
Mohamed Abdalla
Frank Rudzicz
28
10
0
01 Oct 2020
Bridging Information-Seeking Human Gaze and Machine Reading Comprehension
J. Malmaud
R. Levy
Yevgeni Berzak
22
31
0
30 Sep 2020
Multi-timescale Representation Learning in LSTM Language Models
Shivangi Mahto
Vy A. Vo
Javier S. Turek
Alexander G. Huth
15
29
0
27 Sep 2020
BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig
Ali Madani
L. Varshney
Caiming Xiong
R. Socher
Nazneen Rajani
29
288
0
26 Jun 2020
What shapes feature representations? Exploring datasets, architectures, and training
Katherine L. Hermann
Andrew Kyle Lampinen
OOD
23
153
0
22 Jun 2020
Probing Neural Dialog Models for Conversational Understanding
Abdelrhman Saleh
Tovly Deutsch
Stephen Casper
Yonatan Belinkov
Stuart M. Shieber
21
13
0
07 Jun 2020
Syntactic Structure Distillation Pretraining For Bidirectional Encoders
A. Kuncoro
Lingpeng Kong
Daniel Fried
Dani Yogatama
Laura Rimell
Chris Dyer
Phil Blunsom
31
33
0
27 May 2020
CausaLM: Causal Model Explanation Through Counterfactual Language Models
Amir Feder
Nadav Oved
Uri Shalit
Roi Reichart
CML
LRM
41
156
0
27 May 2020
On the Robustness of Language Encoders against Grammatical Errors
Fan Yin
Quanyu Long
Tao Meng
Kai-Wei Chang
31
34
0
12 May 2020
The Cascade Transformer: an Application for Efficient Answer Sentence Selection
Luca Soldaini
Alessandro Moschitti
24
44
0
05 May 2020
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka
Allyson Ettinger
51
42
0
04 May 2020
Probing Contextual Language Models for Common Ground with Visual Representations
Gabriel Ilharco
Rowan Zellers
Ali Farhadi
Hannaneh Hajishirzi
30
14
0
01 May 2020
Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction
Christoph Alt
Aleksandra Gabryszak
Leonhard Hennig
NAI
13
34
0
17 Apr 2020
An empirical analysis of information encoded in disentangled neural speaker representations
Raghuveer Peri
Haoqi Li
Krishna Somandepalli
Arindam Jati
Shrikanth Narayanan
DRL
27
13
0
10 Feb 2020
oLMpics -- On what Language Model Pre-training Captures
Alon Talmor
Yanai Elazar
Yoav Goldberg
Jonathan Berant
LRM
19
300
0
31 Dec 2019
On the Linguistic Representational Power of Neural Machine Translation Models
Yonatan Belinkov
Nadir Durrani
Fahim Dalvi
Hassan Sajjad
James R. Glass
MILM
33
68
0
01 Nov 2019
Evaluation of Sentence Representations in Polish
Slawomir Dadas
Michal Perelkiewicz
Rafal Poswiata
16
13
0
25 Oct 2019
Discovering the Compositional Structure of Vector Representations with Role Learning Networks
Paul Soulos
R. Thomas McCoy
Tal Linzen
P. Smolensky
CoGe
29
43
0
21 Oct 2019
What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
Miryam de Lhoneux
Sara Stymne
Joakim Nivre
12
3
0
18 Jul 2019
Analyzing Phonetic and Graphemic Representations in End-to-End Automatic Speech Recognition
Yonatan Belinkov
Ahmed M. Ali
James R. Glass
25
32
0
09 Jul 2019
Learning Compressed Sentence Representations for On-Device Text Processing
Dinghan Shen
Pengyu Cheng
Dhanasekar Sundararaman
Xinyuan Zhang
Qian Yang
Meng Tang
Asli Celikyilmaz
Lawrence Carin
18
22
0
19 Jun 2019
Scalable Syntax-Aware Language Models Using Knowledge Distillation
A. Kuncoro
Chris Dyer
Laura Rimell
S. Clark
Phil Blunsom
35
26
0
14 Jun 2019
Putting words in context: LSTM language models and lexical ambiguity
Laura Aina
Kristina Gulordava
Gemma Boleda
13
40
0
12 Jun 2019
What Does BERT Look At? An Analysis of BERT's Attention
Kevin Clark
Urvashi Khandelwal
Omer Levy
Christopher D. Manning
MILM
66
1,581
0
11 Jun 2019
Pretraining Methods for Dialog Context Representation Learning
Shikib Mehri
E. Razumovskaia
Tiancheng Zhao
M. Eskénazi
16
84
0
02 Jun 2019
What do you learn from context? Probing for sentence structure in contextualized word representations
Ian Tenney
Patrick Xia
Berlin Chen
Alex Jinpeng Wang
Adam Poliak
...
Najoung Kim
Benjamin Van Durme
Samuel R. Bowman
Dipanjan Das
Ellie Pavlick
91
848
0
15 May 2019
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension
Najoung Kim
Roma Patel
Adam Poliak
Alex Jinpeng Wang
Patrick Xia
...
Alexis Ross
Tal Linzen
Benjamin Van Durme
Samuel R. Bowman
Ellie Pavlick
22
105
0
25 Apr 2019
Evaluating the Representational Hub of Language and Vision Models
Ravi Shekhar
Ece Takmaz
Raquel Fernández
Raffaella Bernardi
27
11
0
12 Apr 2019
Linguistic Knowledge and Transferability of Contextual Representations
Nelson F. Liu
Matt Gardner
Yonatan Belinkov
Matthew E. Peters
Noah A. Smith
52
717
0
21 Mar 2019
Trick or TReAT: Thematic Reinforcement for Artistic Typography
Purva Tendulkar
Kalpesh Krishna
Ramprasaath R. Selvaraju
Devi Parikh
13
13
0
19 Mar 2019