How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation?
Rheeya Uppaal, Yixuan Li, Junjie Hu
arXiv:2401.17514 (v3, latest), 31 January 2024
Papers citing "How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation?" (50 of 62 papers shown)
Process Reward Model with Q-Value Rankings
W. Li, Yixuan Li (15 Oct 2024)

Exploring Gen-AI applications in building research and industry: A review
Hanlong Wan, Jian Zhang, Yan Chen, Weili Xu, Fan Feng (01 Oct 2024)

MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents
Liyan Tang, Philippe Laban, Greg Durrett (16 Apr 2024)

Evolving Domain Adaptation of Pretrained Language Models for Text Classification
Yun-Shiuan Chuang, Yi Wu, Dhruv Gupta, Rheeya Uppaal, Ananya Kumar, Luhang Sun, Makesh Narsimhan Sreedhar, Sijia Yang, Timothy T. Rogers, Junjie Hu (16 Nov 2023)

Instruction Tuning for Large Language Models: A Survey
Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, ..., Jiwei Li, Runyi Hu, Tianwei Zhang, Leilei Gan, Guoyin Wang (21 Aug 2023)

Cross-Lingual Transfer with Target Language-Ready Task Adapters
Marinela Parović, Alan Ansell, Ivan Vulić, Anna Korhonen (05 Jun 2023)

Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection
Rheeya Uppaal, Junjie Hu, Yixuan Li (22 May 2023)

TADA: Efficient Task-Agnostic Domain Adaptation for Transformers
Chia-Chien Hung, Lukas Lange, Jannik Strötgen (22 May 2023)

Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis
Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, Lei Li (10 Apr 2023)

LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, ..., Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample (27 Feb 2023)
UDApter -- Efficient Domain Adaptation Using Adapters
Bhavitvya Malik, Abhinav Ramesh Kashyap, Min-Yen Kan, Soujanya Poria (07 Feb 2023)
Editing Models with Task Arithmetic
Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi (08 Dec 2022)

On the Effectiveness of Parameter-Efficient Fine-Tuning
Z. Fu, Haoran Yang, Anthony Man-Cho So, Wai Lam, Lidong Bing, Nigel Collier (28 Nov 2022)

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf (09 Nov 2022)

Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation
Xu Guo, Boyang Albert Li, Han Yu (06 Oct 2022)

Domain Confused Contrastive Learning for Unsupervised Domain Adaptation
Quanyu Long, Tianze Luo, Wenya Wang, Sinno Jialin Pan (10 Jul 2022)
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel (11 May 2022)
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, ..., Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, Daniel Khashabi (16 Apr 2022)

Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, Percy Liang (01 Apr 2022)

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe (04 Mar 2022)

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang (21 Feb 2022)

Should You Mask 15% in Masked Language Modeling?
Alexander Wettig, Tianyu Gao, Zexuan Zhong, Danqi Chen (16 Feb 2022)

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, ..., Khalid Almubarak, Xiangru Tang, Dragomir R. Radev, Mike Tian-Jian Jiang, Alexander M. Rush (02 Feb 2022)

Unsupervised Domain Adaptation with Adapter
Rongsheng Zhang, Yinhe Zheng, Xiaoxi Mao, Minlie Huang (01 Nov 2021)

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush (15 Oct 2021)

Towards a Unified View of Parameter-Efficient Transfer Learning
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig (08 Oct 2021)

Finetuned Language Models Are Zero-Shot Learners
Jason W. Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le (03 Sep 2021)

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig (28 Jul 2021)

LoRA: Low-Rank Adaptation of Large Language Models
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen (17 Jun 2021)

UDALM: Unsupervised Domain Adaptation through Language Modeling
Constantinos F. Karouzos, Georgios Paraskevopoulos, Alexandros Potamianos (14 Apr 2021)

Long Document Summarization in a Low Resource Setting using Pretrained Language Models
Ahsaas Bajaj, Pavitra Dangati, Kalpesh Krishna, Pradhiksha Ashok Kumar, Rheeya Uppaal, Bradford T. Windsor, Eliot Brenner, Dominic Dotterrer, Rajarshi Das, Andrew McCallum (01 Mar 2021)

Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang (01 Jan 2021)

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen (31 Dec 2020)

Trust Issues: Uncertainty Estimation Does Not Enable Reliable OOD Detection On Medical Tabular Data
Dennis Ulmer, L. Meijerink, Giovanni Cina (06 Nov 2020)

Domain Divergences: a Survey and Empirical Analysis
Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, Roger Zimmermann (23 Oct 2020)

PMI-Masking: Principled masking of correlated spans
Yoav Levine, Barak Lenz, Opher Lieber, Omri Abend, Kevin Leyton-Brown, Moshe Tennenholtz, Y. Shoham (05 Oct 2020)

PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized Embedding Models
Eyal Ben-David, Carmel Rabinovitz, Roi Reichart (16 Jun 2020)

Neural Unsupervised Domain Adaptation in NLP---A Survey
Alan Ramponi, Barbara Plank (31 May 2020)

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei (28 May 2020)

Beyond Accuracy: Behavioral Testing of NLP models with CheckList
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh (08 May 2020)

MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder (30 Apr 2020)

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith (23 Apr 2020)

Train No Evil: Selective Masking for Task-Guided Pre-Training
Yuxian Gu, Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun (21 Apr 2020)
Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits
Han Guo, Ramakanth Pasunuru, Mohit Bansal (13 Jan 2020)
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu (23 Oct 2019)

Unsupervised Domain Adaptation through Self-Supervision
Yu Sun, Eric Tzeng, Trevor Darrell, Alexei A. Efros (26 Sep 2019)

MASS: Masked Sequence to Sequence Pre-training for Language Generation
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu (07 May 2019)

Parameter-Efficient Transfer Learning for NLP
N. Houlsby, A. Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly (02 Feb 2019)

BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang (25 Jan 2019)

A Survey of Unsupervised Deep Domain Adaptation
Garrett Wilson, D. Cook (06 Dec 2018)