MNet-Sim: A Multi-layered Semantic Similarity Network to Evaluate Sentence Similarity
Manuela Nayantara Jeyaraj, D. Kasthurirathna
9 November 2021 (arXiv:2111.05412)

Papers citing "MNet-Sim: A Multi-layered Semantic Similarity Network to Evaluate Sentence Similarity"

11 / 11 papers shown
RealFormer: Transformer Likes Residual Attention
Ruining He, Anirudh Ravula, Bhargav Kanagal, Joshua Ainslie
21 Dec 2020
Generalized Word Shift Graphs: A Method for Visualizing and Explaining Pairwise Comparisons Between Texts
Ryan J. Gallagher, M. Frank, Lewis Mitchell, A. Schwartz, A. J. Reagan, C. Danforth, P. Dodds
05 Aug 2020
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, T. Zhao
08 Nov 2019
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
23 Oct 2019
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut
26 Sep 2019
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
Wei Wang, Bin Bi, Ming Yan, Chen Henry Wu, Zuyi Bao, Jiangnan Xia, Liwei Peng, Luo Si
13 Aug 2019
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
11 Oct 2018
Self-Attention with Relative Position Representations
Peter Shaw, Jakob Uszkoreit, Ashish Vaswani
06 Mar 2018
Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
15 Feb 2018
Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017
Cosine Normalization: Using Cosine Similarity Instead of Dot Product in Neural Networks
Chunjie Luo, Jianfeng Zhan, Lei Wang, Qiang Yang
20 Feb 2017