From Characters to Words: Hierarchical Pre-trained Language Model for Open-vocabulary Language Understanding

23 May 2023
Li Sun, F. Luisier, Kayhan Batmanghelich, D. Florêncio, Changrong Zhang

Papers citing "From Characters to Words: Hierarchical Pre-trained Language Model for Open-vocabulary Language Understanding"

8 papers shown

A Practical Guide to Fine-tuning Language Models with Limited Data
Márton Szép, Daniel Rueckert, Rüdiger von Eisenhart-Rothe, Florian Hinterwimmer · 14 Nov 2024

Heidelberg-Boston @ SIGTYP 2024 Shared Task: Enhancing Low-Resource Language Analysis With Character-Aware Hierarchical Transformers
Frederick Riemenschneider, Kevin Krahn · 30 May 2024

Knowledge of Pretrained Language Models on Surface Information of Tokens
Tatsuya Hiraoka, Naoaki Okazaki · 15 Feb 2024

Learning Mutually Informed Representations for Characters and Subwords
Yilin Wang, Xinyi Hu, Matthew R. Gormley · 14 Nov 2023

Char2Subword: Extending the Subword Embedding Space Using Robust Character Compositionality
Gustavo Aguilar, Bryan McCann, Tong Niu, Nazneen Rajani, N. Keskar, Thamar Solorio · 24 Oct 2020

CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters
Hicham El Boukkouri, Olivier Ferret, Thomas Lavergne, Hiroshi Noji, Pierre Zweigenbaum, Junichi Tsujii · 20 Oct 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · 20 Apr 2018

Efficient Estimation of Word Representations in Vector Space
Tomáš Mikolov, Kai Chen, G. Corrado, J. Dean · 16 Jan 2013