No Training Required: Exploring Random Encoders for Sentence Classification

29 January 2019
John Wieting, Douwe Kiela
arXiv:1901.10444

Papers citing "No Training Required: Exploring Random Encoders for Sentence Classification"

50 of 69 citing papers shown.

Identifying and Mitigating the Influence of the Prior Distribution in Large Language Models
Liyi Zhang, Veniamin Veselovsky, R. Thomas McCoy, Thomas L. Griffiths
17 Apr 2025

Syntactic Learnability of Echo State Neural Language Models at Scale
Ryo Ueda, Tatsuki Kuribayashi, Shunsuke Kando, Kentaro Inui
03 Mar 2025

Text Classification using Graph Convolutional Networks: A Comprehensive Survey
Syed Mustafa Haider Rizvi, Ramsha Imran, Arif Mahmood
Tags: GNN, OOD, FaML
12 Oct 2024

Private Language Models via Truncated Laplacian Mechanism
Tianhao Huang, Tao Yang, Ivan Habernal, Lijie Hu, Di Wang
10 Oct 2024

Mechanistic?
Naomi Saphra, Sarah Wiegreffe
Tags: AI4CE
07 Oct 2024

Establishing Deep InfoMax as an effective self-supervised learning methodology in materials informatics
Michael Moran, Vladimir V. Gusev, M. Gaultois, Dmytro Antypov, M. Rosseinsky
Tags: AI4CE
30 Jun 2024

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
Tags: SILM, AAML
14 Feb 2023

Emergence of Maps in the Memories of Blind Navigation Agents
Erik Wijmans, Manolis Savva, Irfan Essa, Stefan Lee, Ari S. Morcos, Dhruv Batra
30 Jan 2023

Probing of Quantitative Values in Abstractive Summarization Models
Nathan M. White
03 Oct 2022

EigenNoise: A Contrastive Prior to Warm-Start Representations
H. Heidenreich, Jake Williams
09 May 2022

Knowledge Distillation of Russian Language Models with Reduction of Vocabulary
A. Kolesnikova, Yuri Kuratov, Vasily Konovalov, Andrey Kravchenko
Tags: VLM
04 May 2022

To Know by the Company Words Keep and What Else Lies in the Vicinity
Jake Williams, H. Heidenreich
30 Apr 2022

Empirical Evaluation and Theoretical Analysis for Representation Learning: A Survey
Kento Nozawa, Issei Sato
Tags: AI4TS
18 Apr 2022

Identifying stimulus-driven neural activity patterns in multi-patient intracranial recordings
Jeremy R. Manning
04 Feb 2022

An Isotropy Analysis in the Multilingual BERT Embedding Space
S. Rajaee, Mohammad Taher Pilehvar
09 Oct 2021

Efficient and Private Federated Learning with Partially Trainable Networks
Hakim Sidahmed, Zheng Xu, Ankush Garg, Yuan Cao, Mingqing Chen
Tags: FedML
06 Oct 2021

General Cross-Architecture Distillation of Pretrained Language Models into Matrix Embeddings
Lukas Galke, Isabelle Cuber, Christophe Meyer, Henrik Ferdinand Nolscher, Angelina Sonderecker, A. Scherp
17 Sep 2021

What's Hidden in a One-layer Randomly Weighted Transformer?
Sheng Shen, Z. Yao, Douwe Kiela, Kurt Keutzer, Michael W. Mahoney
08 Sep 2021

Text Classification and Clustering with Annealing Soft Nearest Neighbor Loss
Abien Fred Agarap
Tags: DRL
23 Jul 2021

COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences
Shikhar Singh, Nuan Wen, Yu Hou, Pegah Alipoormolabashi, Te-Lin Wu, Xuezhe Ma, Nanyun Peng
Tags: LRM
02 Jun 2021

Random Embeddings and Linear Regression can Predict Protein Function
Tianyu Lu, Alex X. Lu, Alan M. Moses
Tags: SSL
25 Apr 2021

Sentence Embeddings by Ensemble Distillation
Fredrik Carlsson, Magnus Sahlgren
14 Apr 2021

Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, J. Pineau, Adina Williams, Douwe Kiela
14 Apr 2021

Cost-effective Deployment of BERT Models in Serverless Environment
Katarína Benesová, Andrej Svec, Marek Suppa
19 Mar 2021

State Entropy Maximization with Random Encoders for Efficient Exploration
Younggyo Seo, Lili Chen, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
18 Feb 2021

On the Interpretability of Deep Learning Based Models for Knowledge Tracing
Xinyi Ding, Eric C. Larson
27 Jan 2021

Reservoir Transformers
Sheng Shen, Alexei Baevski, Ari S. Morcos, Kurt Keutzer, Michael Auli, Douwe Kiela
30 Dec 2020

Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English
Gongbo Tang, Rico Sennrich, Joakim Nivre
06 Nov 2020

Not all parameters are born equal: Attention is mostly what you need
Nikolay Bogoychev
Tags: MoE
22 Oct 2020

Robust and Generalizable Visual Representation Learning via Random Convolutions
Zhenlin Xu, Deyi Liu, Junlin Yang, Colin Raffel, Marc Niethammer
Tags: OOD, AAML
25 Jul 2020

Pre-training via Paraphrasing
M. Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida I. Wang, Luke Zettlemoyer
Tags: AIMat
26 Jun 2020

How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation
Steffen Eger, Johannes Daxenberger, Iryna Gurevych
16 Jun 2020

Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text
Felix Hill, Soňa Mokrá, Nathaniel Wong, Tim Harley
Tags: LM&Ro
19 May 2020

Similarity Analysis of Contextual Word Representation Models
John M. Wu, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James R. Glass
03 May 2020

Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?
Abhilasha Ravichander, Yonatan Belinkov, Eduard H. Hovy
02 May 2020

Sparse, Dense, and Attentional Representations for Text Retrieval
Y. Luan, Jacob Eisenstein, Kristina Toutanova, M. Collins
01 May 2020

Quantifying the Contextualization of Word Representations with Semantic Class Probing
Mengjie Zhao, Philipp Dufter, Yadollah Yaghoobzadeh, Hinrich Schütze
25 Apr 2020

A Revised Generative Evaluation of Visual Dialogue
Daniela Massiceti, Viveka Kulharia, P. Dokania, N. Siddharth, Philip Torr
20 Apr 2020

Information-Theoretic Probing with Minimum Description Length
Elena Voita, Ivan Titov
27 Mar 2020

How Powerful Are Randomly Initialized Pointcloud Set Functions?
Aditya Sanghi, P. Jayaraman
Tags: 3DPC
11 Mar 2020

Echo State Neural Machine Translation
Ankush Garg, Yuan Cao, Qi Ge
27 Feb 2020

On the impressive performance of randomly weighted encoders in summarization tasks
Jonathan Pilault, Jaehong Park, C. Pal
Tags: BDL
21 Feb 2020

Contextual Lensing of Universal Sentence Representations
J. Kiros
20 Feb 2020

Text Classification with Lexicon from PreAttention Mechanism
Qingbiao Li, Chunhua Wu, K. Zheng
Tags: VLM
18 Feb 2020

Neural Machine Translation: A Review and Survey
Felix Stahlberg
Tags: 3DV, AI4TS, MedIm
04 Dec 2019

COSTRA 1.0: A Dataset of Complex Sentence Transformations
P. Barancíková, Ondrej Bojar
03 Dec 2019

Do Attention Heads in BERT Track Syntactic Dependencies?
Phu Mon Htut, Jason Phang, Shikha Bordia, Samuel R. Bowman
27 Nov 2019

Unsupervised Natural Question Answering with a Small Model
Martin Andrews, Sam Witteveen
Tags: KELM, ELM
19 Nov 2019

What do you mean, BERT? Assessing BERT as a Distributional Semantics Model
Timothee Mickus, Denis Paperno, Mathieu Constant, Kees van Deemter
13 Nov 2019

Deep Contextualized Self-training for Low Resource Dependency Parsing
Guy Rotman, Roi Reichart
11 Nov 2019