Lerna: Transformer Architectures for Configuring Error Correction Tools for Short- and Long-Read Genome Sequencing

19 December 2021
Atul Sharma, Pranjali Jain, Ashraf Y. Mahgoub, Zihan Zhou, K. Mahadik, Somali Chaterji

Papers citing "Lerna: Transformer Architectures for Configuring Error Correction Tools for Short- and Long-Read Genome Sequencing"

5 / 5 papers shown
  • A Deep Reinforced Model for Abstractive Summarization
    Romain Paulus, Caiming Xiong, R. Socher (AI4TS)
    11 May 2017
  • A Structured Self-attentive Sentence Embedding
    Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, Yoshua Bengio
    09 Mar 2017
  • A Decomposable Attention Model for Natural Language Inference
    Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
    06 Jun 2016
  • Effective Approaches to Attention-based Neural Machine Translation
    Thang Luong, Hieu H. Pham, Christopher D. Manning
    17 Aug 2015
  • Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
    Junyoung Chung, Çağlar Gülçehre, Kyunghyun Cho, Yoshua Bengio
    11 Dec 2014