ResearchTrend.AI

BERTweetFR: Domain Adaptation of Pre-Trained Language Models for French Tweets

21 September 2021
Yanzhu Guo
Virgile Rennard
Christos Xypolopoulos
Michalis Vazirgiannis
VLM, AI4CE
arXiv:2109.10234

Papers citing "BERTweetFR: Domain Adaptation of Pre-Trained Language Models for French Tweets"

8 / 8 papers shown
  • BERTweet: A pre-trained language model for English Tweets
    Dat Quoc Nguyen, Thanh Tien Vu, A. Nguyen · VLM · 20 May 2020
  • Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
    Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith · VLM, AI4CE, CLL · 23 Apr 2020
  • FlauBERT: Unsupervised Language Model Pre-training for French
    Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, A. Allauzen, Benoît Crabbé, Laurent Besacier, D. Schwab · AI4CE · 11 Dec 2019
  • CamemBERT: a Tasty French Language Model
    Louis Martin, Benjamin Muller, Pedro Ortiz Suarez, Yoann Dupont, Laurent Romary, Eric Villemonte de la Clergerie, Djamé Seddah, Benoît Sagot · 10 Nov 2019
  • SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
    Taku Kudo, John Richardson · 19 Aug 2018
  • Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates
    Taku Kudo · 29 Apr 2018
  • Unsupervised Pretraining for Sequence to Sequence Learning
    Prajit Ramachandran, Peter J. Liu, Quoc V. Le · SSL, AIMat · 08 Nov 2016
  • Distributed Representations of Words and Phrases and their Compositionality
    Tomas Mikolov, Ilya Sutskever, Kai Chen, G. Corrado, J. Dean · NAI, OCL · 16 Oct 2013