To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks

15 June 2020 · arXiv:2006.08671
Sinong Wang, Madian Khabsa, Hao Ma

Papers citing "To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks"

6 papers shown.

| Title | Authors | Tags | Citations | Date |
| --- | --- | --- | --- | --- |
| On the Role of Pre-trained Embeddings in Binary Code Analysis | Alwin Maier, Felix Weissberg, Konrad Rieck | – | 0 | 12 Feb 2025 |
| SpanBERT: Improving Pre-training by Representing and Predicting Spans | Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy | – | 1,962 | 24 Jul 2019 |
| Rethinking ImageNet Pre-training | Kaiming He, Ross B. Girshick, Piotr Dollár | VLM, SSeg | 1,083 | 21 Nov 2018 |
| Deep contextualized word representations | Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer | NAI | 11,542 | 15 Feb 2018 |
| Revisiting Unreasonable Effectiveness of Data in Deep Learning Era | Chen Sun, Abhinav Shrivastava, Saurabh Singh, Abhinav Gupta | VLM | 2,393 | 10 Jul 2017 |
| Neural Machine Translation of Rare Words with Subword Units | Rico Sennrich, Barry Haddow, Alexandra Birch | – | 7,729 | 31 Aug 2015 |