A Tutorial on the Pretrain-Finetune Paradigm for Natural Language Processing

4 March 2024
Yu Wang, Wen Qu
arXiv · PDF · HTML

Papers citing "A Tutorial on the Pretrain-Finetune Paradigm for Natural Language Processing" (10 of 10 papers shown)
1. Navigating Prompt Complexity for Zero-Shot Classification: A Study of Large Language Models in Computational Social Science
   Yida Mu, Benze Wu, William Thorne, Ambrose Robinson, Nikolaos Aletras, Carolina Scarton, Kalina Bontcheva, Xingyi Song
   23 May 2023
2. Can Large Language Models Transform Computational Social Science?
   Caleb Ziems, William B. Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang
   12 Apr 2023 · LLMAG
3. Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT
   Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
   19 Feb 2023 · AI4MH
4. How Effective is Task-Agnostic Data Augmentation for Pretrained Transformers?
   Shayne Longpre, Yu Wang, Christopher DuBois
   05 Oct 2020 · ViT
5. Measuring Emotions in the COVID-19 Real World Worry Dataset
   Bennett Kleinberg, Isabelle van der Vegt, Maximilian Mozes
   08 Apr 2020
6. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
   Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut
   26 Sep 2019 · SSL, AIMat
7. RoBERTa: A Robustly Optimized BERT Pretraining Approach
   Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
   26 Jul 2019 · AIMat
8. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
   Yang You, Jing Li, Sashank J. Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, J. Demmel, Kurt Keutzer, Cho-Jui Hsieh
   01 Apr 2019 · ODL
9. Don't Decay the Learning Rate, Increase the Batch Size
   Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le
   01 Nov 2017 · ODL
10. Distributed Representations of Words and Phrases and their Compositionality
    Tomas Mikolov, Ilya Sutskever, Kai Chen, G. Corrado, J. Dean
    16 Oct 2013 · NAI, OCL