ResearchTrend.AI
Smooth Sailing: Improving Active Learning for Pre-trained Language Models with Representation Smoothness Analysis
Josip Jukić, Jan Snajder · 20 December 2022 · arXiv:2212.11680

Papers citing "Smooth Sailing: Improving Active Learning for Pre-trained Language Models with Representation Smoothness Analysis"

34 / 34 papers shown
Measures of Information Reflect Memorization Patterns
  Rachit Bansal, Danish Pruthi, Yonatan Belinkov · 17 Oct 2022

Active Learning by Acquiring Contrastive Examples
  Katerina Margatina, Giorgos Vernikos, Loïc Barrault, Nikolaos Aletras · 08 Sep 2021

Revisiting Uncertainty-based Query Strategies for Active Learning with Transformers
  Christopher Schröder, A. Niekler, Martin Potthast · 12 Jul 2021

Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering
  Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, Christopher D. Manning · 06 Jul 2021

Deep Learning Through the Lens of Example Difficulty
  R. Baldock, Hartmut Maennel, Behnam Neyshabur · 17 Jun 2021

On the geometry of generalization and memorization in deep neural networks
  Cory Stephenson, Suchismita Padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, SueYeon Chung · 30 May 2021 · [TDI, AI4CE]

On the Importance of Effectively Adapting Pretrained Language Models for Active Learning
  Katerina Margatina, Loïc Barrault, Nikolaos Aletras · 16 Apr 2021

Stopping Criterion for Active Learning Based on Error Stability
  Hideaki Ishibashi, H. Hino · 05 Apr 2021

Active Learning for Sequence Tagging with Deep Pre-trained Models and Bayesian Uncertainty Estimates
  Artem Shelmanov, Dmitri Puzyrev, L. Kupriyanova, D. Belyakov, Daniil Larionov, Nikita Khromov, Olga Kozlova, Ekaterina Artemova, Dmitry V. Dylov, Alexander Panchenko · 20 Jan 2021 · [BDL, UQLM, UQCV]

Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning
  Daniel Grießhaber, J. Maucher, Ngoc Thang Vu · 04 Dec 2020

Cold-start Active Learning through Self-supervised Language Modeling
  Michelle Yuan, Hsuan-Tien Lin, Jordan L. Boyd-Graber · 19 Oct 2020

Revisiting Few-sample BERT Fine-tuning
  Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi · 10 Jun 2020

On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
  Marius Mosbach, Maksym Andriushchenko, Dietrich Klakow · 08 Jun 2020

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
  Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith · 23 Apr 2020 · [VLM, AI4CE, CLL]

Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping
  Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, Noah A. Smith · 15 Feb 2020

Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space
  Taiji Suzuki, Atsushi Nitanda · 28 Oct 2019

Sampling Bias in Deep Active Classification: An Empirical Study
  Ameya Prabhu, Charles Dognin, M. Singh · 20 Sep 2019

Discriminative Active Learning
  Daniel Gissin, Shai Shalev-Shwartz · 15 Jul 2019

Low-resource Deep Entity Resolution with Transfer and Active Learning
  Jungo Kasai, Kun Qian, Sairam Gurajada, Yunyao Li, Lucian Popa · 17 Jun 2019

Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds
  Jordan T. Ash, Chicheng Zhang, A. Krishnamurthy, John Langford, Alekh Agarwal · 09 Jun 2019 · [BDL, UQCV]

Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality
  Taiji Suzuki · 18 Oct 2018

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · 11 Oct 2018 · [VLM, SSL, SSeg]

Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study
  Aditya Siddhant, Zachary Chase Lipton · 16 Aug 2018 · [AI4CE, BDL]

Practical Obstacles to Deploying Active Learning
  David Lowell, Zachary Chase Lipton, Byron C. Wallace · 12 Jul 2018

Multi-Task Active Learning for Neural Semantic Role Labeling on Low Resource Conversational Corpus
  Fariz Ikhwantri, Samuel Louvan, Kemal Kurniawan, Bagas Abisena, V. Rachman, A. Wicaksono, Rahmad Mahendra · 05 Jun 2018

Deep Neural Networks Learn Non-Smooth Functions Effectively
  Masaaki Imaizumi, Kenji Fukumizu · 13 Feb 2018

Function space analysis of deep learning representation layers
  Oren Elisha, S. Dekel · 09 Oct 2017

Optimal approximation of piecewise smooth functions using deep ReLU neural networks
  P. Petersen, Felix Voigtländer · 15 Sep 2017

Attention Is All You Need
  Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin · 12 Jun 2017 · [3DV]

Active Learning for Speech Recognition: the Power of Gradients
  Jiaji Huang, R. Child, Vinay Rao, Hairong Liu, S. Satheesh, Adam Coates · 10 Dec 2016 · [VLM]

Error bounds for approximations with deep ReLU networks
  Dmitry Yarotsky · 03 Oct 2016

Character-level Convolutional Networks for Text Classification
  Xiang Zhang, Jiaqi Zhao, Yann LeCun · 04 Sep 2015

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
  Y. Gal, Zoubin Ghahramani · 06 Jun 2015 · [UQCV, BDL]

A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping
  Michael Bloodgood, K. Vijay-Shanker · 17 Sep 2014