Improving Low Compute Language Modeling with In-Domain Embedding Initialisation

29 September 2020
Charles F Welch, Rada Mihalcea, Jonathan K. Kummerfeld

Papers citing "Improving Low Compute Language Modeling with In-Domain Embedding Initialisation"

Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference
Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, ..., Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard, Iacopo Poli
18 Dec 2024

Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models
Seungduk Kim, Seungtaek Choi, Myeongho Jeong
22 Feb 2024

Introducing DictaLM -- A Large Generative Language Model for Modern Hebrew
Shaltiel Shmidman, Avi Shmidman, Amir DN Cohen, Moshe Koppel
25 Sep 2023

Compositional Demographic Word Embeddings
Charles F Welch, Jonathan K. Kummerfeld, Verónica Pérez-Rosas, Rada Mihalcea
06 Oct 2020

Stanza: A Python Natural Language Processing Toolkit for Many Human Languages
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, Christopher D. Manning
16 Mar 2020