arXiv:2010.07835
Cited By
Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach
Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, T. Zhao, Chao Zhang
15 October 2020 · AI4MH
Papers citing "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach" (50 papers)
Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation with Large Language Models
Ran Xu, Hejie Cui, Yue Yu, Xuan Kan, Wenqi Shi, Yuchen Zhuang, Wei Jin, Joyce C. Ho, Carl Yang
28 Jan 2025 · 16 citations

An Adaptive Method for Weak Supervision with Drifting Data
Alessio Mazzetto, Reza Esfandiarpoor, E. Upfal, Stephen H. Bach
02 Jun 2023 · 1 citation

PRBoost: Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning
Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, Chao Zhang
18 Mar 2022 · 24 citations

Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
Beliz Gunel, Jingfei Du, Alexis Conneau, Ves Stoyanov
03 Nov 2020 · 504 citations

Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data
Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, T. Zhao, Chao Zhang
22 Oct 2020 · OODD · 26 citations

Text Classification Using Label Names Only: A Language Model Self-Training Approach
Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, Jiawei Han
14 Oct 2020 · VLM · 76 citations

Denoising Multi-Source Weak Supervision for Neural Text Classification
Wendi Ren, Yinghao Li, Hanting Su, David Kartchner, Cassie S. Mitchell, Chao Zhang
09 Oct 2020 · NoLa · 70 citations

InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective
Wei Ping, Shuohang Wang, Yu Cheng, Zhe Gan, R. Jia, Yue Liu, Jingjing Liu
05 Oct 2020 · AAML · 116 citations

SeqMix: Augmenting Active Sequence Labeling via Sequence Mixup
Rongzhi Zhang, Yue Yu, Chao Zhang
05 Oct 2020 · VLM · 94 citations

Better Fine-Tuning by Reducing Representational Collapse
Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, S. Gupta
06 Aug 2020 · AAML · 209 citations

BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
Chen Liang, Yue Yu, Haoming Jiang, Siawpeng Er, Ruijia Wang, T. Zhao, Chao Zhang
28 Jun 2020 · OffRL · 237 citations

Uncertainty-aware Self-training for Text Classification with Few Labels
Subhabrata Mukherjee, Ahmed Hassan Awadallah
27 Jun 2020 · UQLM · 43 citations

Revisiting Few-sample BERT Fine-tuning
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi
10 Jun 2020 · 445 citations

A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction
Yilin Niu, Fangkai Jiao, Mantong Zhou, Ting Yao, Jingfang Xu, Minlie Huang
11 May 2020 · SyDa · 33 citations

Named Entity Recognition without Labelled Data: A Weak Supervision Approach
Pierre Lison, A. Hubin, Jeremy Barnes, Samia Touileb
30 Apr 2020 · 112 citations

Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
Mengjie Zhao, Tao R. Lin, Fei Mi, Martin Jaggi, Hinrich Schütze
26 Apr 2020 · 119 citations

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith
23 Apr 2020 · VLM · AI4CE · CLL · 2,398 citations

Learning from Rules Generalizing Labeled Exemplars
Abhijeet Awasthi, Sabyasachi Ghosh, Rasna Goyal, Sunita Sarawagi
13 Apr 2020 · 86 citations

Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation
Yige Xu, Xipeng Qiu, L. Zhou, Xuanjing Huang
24 Feb 2020 · 67 citations

Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, Noah A. Smith
15 Feb 2020 · 594 citations

SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, T. Zhao
08 Nov 2019 · 560 citations

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
23 Oct 2019 · AIMat · 19,824 citations

Revisiting Self-Training for Neural Sequence Generation
Junxian He, Jiatao Gu, Jiajun Shen, Marc'Aurelio Ranzato
30 Sep 2019 · SSL · LRM · 272 citations

FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Chen Zhu, Yu Cheng, Zhe Gan, S. Sun, Tom Goldstein, Jingjing Liu
25 Sep 2019 · AAML · 440 citations

NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction
Wenxuan Zhou, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo Neves, Xiang Ren
05 Sep 2019 · NAI · 54 citations

Learning with Noisy Labels for Sentence-level Sentiment Classification
Hao Wang, Bing-Quan Liu, Chaozhuo Li, Yan Yang, Tianrui Li
31 Aug 2019 · NoLa · 26 citations

SenseBERT: Driving Some Sense into BERT
Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, Y. Shoham
15 Aug 2019 · SSL · 186 citations

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
26 Jul 2019 · AIMat · 24,160 citations

Putting words in context: LSTM language models and lexical ambiguity
Laura Aina, Kristina Gulordava, Gemma Boleda
12 Jun 2019 · 40 citations

Enriching Pre-trained Language Model with Entity Information for Relation Classification
Shanchan Wu, Yifan He
20 May 2019 · 406 citations

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
Alex Jinpeng Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
02 May 2019 · ELM · 2,296 citations

Unsupervised Data Augmentation for Consistency Training
Qizhe Xie, Zihang Dai, Eduard H. Hovy, Minh-Thang Luong, Quoc V. Le
29 Apr 2019 · 2,306 citations

SciBERT: A Pretrained Language Model for Scientific Text
Iz Beltagy, Kyle Lo, Arman Cohan
26 Mar 2019 · 2,948 citations

To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks
Matthew E. Peters, Sebastian Ruder, Noah A. Smith
14 Mar 2019 · 435 citations

BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang
25 Jan 2019 · OOD · 5,579 citations

Bootstrapping Conversational Agents With Weak Supervision
Neil Rohit Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, ..., Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, C. Desmarais, Blake McGregor
14 Dec 2018 · 20 citations

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
11 Oct 2018 · VLM · SSL · SSeg · 93,936 citations

Weakly-Supervised Neural Text Classification
Yu Meng, Jiaming Shen, Chao Zhang, Jiawei Han
02 Sep 2018 · 188 citations

WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations
Mohammad Taher Pilehvar, Jose Camacho-Collados
28 Aug 2018 · 478 citations

Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
15 Feb 2018 · NAI · 11,520 citations

Snorkel: Rapid Training Data Creation with Weak Supervision
Alexander Ratner, Stephen H. Bach, Henry R. Ehrenberg, Jason Alan Fries, Sen Wu, Christopher Ré
28 Nov 2017 · 1,021 citations

mixup: Beyond Empirical Risk Minimization
Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz
25 Oct 2017 · NoLa · 9,687 citations

Learning how to Active Learn: A Deep Reinforcement Learning Approach
Meng Fang, Yuan Li, Trevor Cohn
08 Aug 2017 · 284 citations

Learning with Noise: Enhance Distantly Supervised Relation Extraction with Dynamic Transition Matrix
Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, Dongyan Zhao
11 May 2017 · 86 citations

Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
Takeru Miyato, S. Maeda, Masanori Koyama, S. Ishii
13 Apr 2017 · GAN · 2,728 citations

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen, Harri Valpola
06 Mar 2017 · OOD · MoMe · 1,275 citations

Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra, George Tucker, J. Chorowski, Lukasz Kaiser, Geoffrey E. Hinton
23 Jan 2017 · NoLa · 1,133 citations

Unsupervised Deep Embedding for Clustering Analysis
Junyuan Xie, Ross B. Girshick, Ali Farhadi
19 Nov 2015 · SSL · 2,855 citations

Character-level Convolutional Networks for Text Classification
Xiang Zhang, Jiaqi Zhao, Yann LeCun
04 Sep 2015 · 6,077 citations

Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference
Y. Gal, Zoubin Ghahramani
06 Jun 2015 · UQCV · BDL · 747 citations