Towards Simple and Efficient Task-Adaptive Pre-training for Text Classification
arXiv:2209.12943

26 September 2022
Arnav Ladkat
Aamir Miyajiwala
Samiksha Jagadale
Rekha Kulkarni
Raviraj Joshi

Papers citing "Towards Simple and Efficient Task-Adaptive Pre-training for Text Classification"

11 papers
L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources
Raviraj Joshi
02 Feb 2022
Comparative Study of Long Document Classification
Vedangi Wagh
Snehal Khandve
Isha Joshi
Apurva Wani
Geetanjali Kale
Raviraj Joshi
01 Nov 2021
Task-adaptive Pre-training of Language Models with Word Embedding Regularization
Kosuke Nishida
Kyosuke Nishida
Sen Yoshida
17 Sep 2021
Efficient Domain Adaptation of Language Models via Adaptive Tokenization
Vin Sachidananda
Jason S Kessler
Yi-An Lai
15 Sep 2021
Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding
Shiyang Li
Semih Yavuz
Wenhu Chen
Xifeng Yan
14 Sep 2021
Evaluating Deep Learning Approaches for Covid19 Fake News Detection
Apurva Wani
Isha Joshi
Snehal Khandve
Vedangi Wagh
Raviraj Joshi
11 Jan 2021
Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan
Ana Marasović
Swabha Swayamdipta
Kyle Lo
Iz Beltagy
Doug Downey
Noah A. Smith
23 Apr 2020
Linguistic Knowledge and Transferability of Contextual Representations
Nelson F. Liu
Matt Gardner
Yonatan Belinkov
Matthew E. Peters
Noah A. Smith
21 Mar 2019
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin
Ming-Wei Chang
Kenton Lee
Kristina Toutanova
11 Oct 2018
Deep contextualized word representations
Matthew E. Peters
Mark Neumann
Mohit Iyyer
Matt Gardner
Christopher Clark
Kenton Lee
Luke Zettlemoyer
15 Feb 2018
Learned in Translation: Contextualized Word Vectors
Bryan McCann
James Bradbury
Caiming Xiong
R. Socher
01 Aug 2017