Domain Adaptation for Sparse-Data Settings: What Do We Gain by Not Using Bert?

31 March 2022
Marina Sedinkina
Martin Schmitt
Hinrich Schütze

Papers citing "Domain Adaptation for Sparse-Data Settings: What Do We Gain by Not Using Bert?"

4 / 4 papers shown
An Effective Deployment of Diffusion LM for Data Augmentation in Low-Resource Sentiment Classification
Zhuowei Chen, Lianxi Wang, Yuben Wu, Xinfeng Liao, Yujia Tian, Junyang Zhong
05 Sep 2024

I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
05 Jan 2021

Convolutional Neural Networks for Sentence Classification
Yoon Kim
25 Aug 2014

Efficient Estimation of Word Representations in Vector Space
Tomáš Mikolov, Kai Chen, G. Corrado, J. Dean
16 Jan 2013