ResearchTrend.AI
Home / Papers / 2307.10457 / Cited By
Improving the Reusability of Pre-trained Language Models in Real-world Applications

19 July 2023
Somayeh Ghanbarzadeh, Hamid Palangi, Yan-ping Huang, R. C. Moreno, Hamed Khanpour

Papers citing "Improving the Reusability of Pre-trained Language Models in Real-world Applications"

11 / 11 papers shown
Towards Improving Adversarial Training of NLP Models
Jin Yong Yoo, Yanjun Qi
AAML · 124 · 125 · 0 · 01 Sep 2021

An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models
Lifu Tu, Garima Lalwani, Spandana Gella, He He
LRM · 67 · 186 · 0 · 14 Jul 2020

Pretrained Transformers Improve Out-of-Distribution Robustness
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, R. Krishnan, D. Song
OOD · 145 · 430 · 0 · 13 Apr 2020

Adversarial NLI: A New Benchmark for Natural Language Understanding
Yixin Nie, Adina Williams, Emily Dinan, Joey Tianyi Zhou, Jason Weston, Douwe Kiela
101 · 991 · 0 · 31 Oct 2019

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data
Divyansh Kaushik, Eduard H. Hovy, Zachary Chase Lipton
CML · 68 · 567 · 0 · 26 Sep 2019

Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases
Christopher Clark, Mark Yatskar, Luke Zettlemoyer
OOD · 69 · 463 · 0 · 09 Sep 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
AIMat · 475 · 24,160 · 0 · 26 Jul 2019

Publicly Available Clinical BERT Embeddings
Emily Alsentzer, John R. Murphy, Willie Boag, W. Weng, Di Jin, Tristan Naumann, Matthew B. A. McDermott
AI4MH · 132 · 1,956 · 0 · 06 Apr 2019

PAWS: Paraphrase Adversaries from Word Scrambling
Yuan Zhang, Jason Baldridge, Luheng He
68 · 537 · 0 · 01 Apr 2019

BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang
OOD · 127 · 5,579 · 0 · 25 Jan 2019

A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
Adina Williams, Nikita Nangia, Samuel R. Bowman
464 · 4,444 · 0 · 18 Apr 2017