ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

2 May 2019
Alex Wang
Yada Pruksachatkun
Nikita Nangia
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
    ELM

Papers citing "SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems"

50 / 1,500 papers shown
AI and the Everything in the Whole Wide World Benchmark
Inioluwa Deborah Raji
Emily M. Bender
Amandalynne Paullada
Emily L. Denton
A. Hanna
100
314
0
26 Nov 2021
True Few-Shot Learning with Prompts -- A Real-World Perspective
Timo Schick
Hinrich Schütze
VLM
113
64
0
26 Nov 2021
Few-shot Named Entity Recognition with Cloze Questions
V. Gatta
V. Moscato
Marco Postiglione
Giancarlo Sperlí
56
4
0
24 Nov 2021
ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning
V. Aribandi
Yi Tay
Tal Schuster
J. Rao
H. Zheng
...
Jianmo Ni
Jai Gupta
Kai Hui
Sebastian Ruder
Donald Metzler
MoE
125
216
0
22 Nov 2021
DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
Pengcheng He
Jianfeng Gao
Weizhu Chen
225
1,212
0
18 Nov 2021
Few-Shot Self-Rationalization with Natural Language Prompts
Ana Marasović
Iz Beltagy
Doug Downey
Matthew E. Peters
LRM
91
110
0
16 Nov 2021
Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair
Jason Phang
Angelica Chen
William Huang
Samuel R. Bowman
AAML
76
14
0
16 Nov 2021
Testing the Generalization of Neural Language Models for COVID-19 Misinformation Detection
Jan Philip Wahle
Nischal Ashok Kumar
Terry Ruas
Norman Meuschke
Tirthankar Ghosal
Bela Gipp
82
19
0
15 Nov 2021
Personalized Benchmarking with the Ludwig Benchmarking Toolkit
A. Narayan
Piero Molino
Karan Goel
Willie Neiswanger
Christopher Ré
76
11
0
08 Nov 2021
Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models
Wei Ping
Chejian Xu
Shuohang Wang
Zhe Gan
Yu Cheng
Jianfeng Gao
Ahmed Hassan Awadallah
Yangqiu Song
VLM, ELM, AAML
78
226
0
04 Nov 2021
Benchmarking Multimodal AutoML for Tabular Data with Text Fields
Xingjian Shi
Jonas W. Mueller
Nick Erickson
Mu Li
Alexander J. Smola
LMTD
79
31
0
04 Nov 2021
CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
Subhabrata Mukherjee
Xiaodong Liu
Guoqing Zheng
Saghar Hosseini
Hao Cheng
Greg Yang
Christopher Meek
Ahmed Hassan Awadallah
Jianfeng Gao
ELM
70
11
0
04 Nov 2021
OpenPrompt: An Open-source Framework for Prompt-learning
Ning Ding
Shengding Hu
Weilin Zhao
Yulin Chen
Zhiyuan Liu
Haitao Zheng
Maosong Sun
VLM, LLMAG
108
299
0
03 Nov 2021
Adapting to the Long Tail: A Meta-Analysis of Transfer Learning Research for Language Understanding Tasks
Aakanksha Naik
J. Lehman
Carolyn Rose
96
7
0
02 Nov 2021
Towards Tractable Mathematical Reasoning: Challenges, Strategies, and Opportunities for Solving Math Word Problems
Keyur Faldu
A. Sheth
Prashant Kikani
Manas Gaur
Aditi Avasthi
LRM
62
17
0
29 Oct 2021
Training Verifiers to Solve Math Word Problems
K. Cobbe
V. Kosaraju
Mohammad Bavarian
Mark Chen
Heewoo Jun
...
Jerry Tworek
Jacob Hilton
Reiichiro Nakano
Christopher Hesse
John Schulman
ReLM, OffRL, LRM
417
4,606
0
27 Oct 2021
Connect-the-Dots: Bridging Semantics between Words and Definitions via Aligning Word Sense Inventories
Wenlin Yao
Xiaoman Pan
Lifeng Jin
Jianshu Chen
Dian Yu
Dong Yu
46
7
0
27 Oct 2021
SILG: The Multi-environment Symbolic Interactive Language Grounding Benchmark
Victor Zhong
Austin W. Hanjie
Sida Wang
Karthik Narasimhan
Luke Zettlemoyer
32
12
0
20 Oct 2021
Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs
Kaichao You
Yong Liu
Ziyang Zhang
Jianmin Wang
Michael I. Jordan
Mingsheng Long
226
34
0
20 Oct 2021
SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training
Ankur Bapna
Yu-An Chung
Na Wu
Anmol Gulati
Ye Jia
J. Clark
Melvin Johnson
Jason Riesa
Alexis Conneau
Yu Zhang
VLM
137
96
0
20 Oct 2021
Self-Supervised Representation Learning: Introduction, Advances and Challenges
Linus Ericsson
Henry Gouk
Chen Change Loy
Timothy M. Hospedales
SSL, OOD, AI4TS
91
280
0
18 Oct 2021
BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation
Thomas Scialom
Felix Hill
60
7
0
18 Oct 2021
Deep Transfer Learning & Beyond: Transformer Language Models in Information Systems Research
Ross Gruetzemacher
D. Paradice
78
35
0
18 Oct 2021
Schrödinger's Tree -- On Syntax and Neural Language Models
Artur Kulmizev
Joakim Nivre
70
6
0
17 Oct 2021
Hey AI, Can You Solve Complex Tasks by Talking to Agents?
Tushar Khot
Kyle Richardson
Daniel Khashabi
Ashish Sabharwal
RALM, LRM
67
14
0
16 Oct 2021
Sharpness-Aware Minimization Improves Language Model Generalization
Dara Bahri
H. Mobahi
Yi Tay
182
104
0
16 Oct 2021
A Short Study on Compressing Decoder-Based Language Models
Tianda Li
Yassir El Mesbahi
I. Kobyzev
Ahmad Rashid
A. Mahmud
Nithin Anchuri
Habib Hajimolahoseini
Yang Liu
Mehdi Rezagholizadeh
151
25
0
16 Oct 2021
Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey
Xiaokai Wei
Shen Wang
Dejiao Zhang
Parminder Bhatia
Andrew O. Arnold
KELM
93
46
0
16 Oct 2021
Unsupervised Natural Language Inference Using PHL Triplet Generation
Neeraj Varshney
Pratyay Banerjee
Tejas Gokhale
Chitta Baral
78
9
0
16 Oct 2021
The Dangers of Underclaiming: Reasons for Caution When Reporting How NLP Systems Fail
Sam Bowman
OffRL
115
45
0
15 Oct 2021
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu
Brian Lester
Noah Constant
Rami Al-Rfou
Daniel Cer
VLM, LRM
221
290
0
15 Oct 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu
Kaixuan Ji
Yicheng Fu
Weng Lam Tam
Zhengxiao Du
Zhilin Yang
Jie Tang
VLM
299
863
0
14 Oct 2021
Transferring Semantic Knowledge Into Language Encoders
Mohammad Umair
Francis Ferraro
29
1
0
14 Oct 2021
Towards Efficient NLP: A Standard Evaluation and A Strong Baseline
Xiangyang Liu
Tianxiang Sun
Junliang He
Jiawen Wu
Lingling Wu
Xinyu Zhang
Hao Jiang
Bo Zhao
Xuanjing Huang
Xipeng Qiu
ELM
85
47
0
13 Oct 2021
Leveraging redundancy in attention with Reuse Transformers
Srinadh Bhojanapalli
Ayan Chakrabarti
Andreas Veit
Michal Lukasik
Himanshu Jain
Frederick Liu
Yin-Wen Chang
Sanjiv Kumar
47
27
0
13 Oct 2021
Småprat: DialoGPT for Natural Language Generation of Swedish Dialogue by Transfer Learning
Tosin Adewumi
Rickard Brannvall
Nosheen Abid
M. Pahlavan
Sana Sabah Sabry
F. Liwicki
Marcus Liwicki
50
21
0
12 Oct 2021
Relative Molecule Self-Attention Transformer
Lukasz Maziarka
Dawid Majchrowski
Tomasz Danel
Piotr Gaiński
Jacek Tabor
Igor T. Podolak
Pawel M. Morkisz
Stanislaw Jastrzebski
MedIm
87
36
0
12 Oct 2021
A Few More Examples May Be Worth Billions of Parameters
Yuval Kirstain
Patrick Lewis
Sebastian Riedel
Omer Levy
124
21
0
08 Oct 2021
Towards a Unified View of Parameter-Efficient Transfer Learning
Junxian He
Chunting Zhou
Xuezhe Ma
Taylor Berg-Kirkpatrick
Graham Neubig
AAML
170
958
0
08 Oct 2021
CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability
Martin Mundt
Steven Braun
Quentin Delfosse
Kristian Kersting
75
36
0
07 Oct 2021
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English
Ilias Chalkidis
Abhik Jana
D. Hartung
M. Bommarito
Ion Androutsopoulos
Daniel Martin Katz
Nikolaos Aletras
AILaw, ELM
280
267
0
03 Oct 2021
Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark
Joel Niklaus
Ilias Chalkidis
Matthias Sturmer
ELM, AILaw
67
70
0
02 Oct 2021
RAFT: A Real-World Few-Shot Text Classification Benchmark
Neel Alex
Eli Lifland
Lewis Tunstall
A. Thakur
Pegah Maham
...
Carolyn Ashurst
Paul Sedille
A. Carlier
M. Noetel
Andreas Stuhlmuller
RALM
217
56
0
28 Sep 2021
FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
Yanan Zheng
Jing Zhou
Yujie Qian
Ming Ding
Chonghua Liao
Jian Li
Ruslan Salakhutdinov
Jie Tang
Sebastian Ruder
Zhilin Yang
ELM
278
29
0
27 Sep 2021
Distiller: A Systematic Study of Model Distillation Methods in Natural Language Processing
Haoyu He
Xingjian Shi
Jonas W. Mueller
Zha Sheng
Mu Li
George Karypis
69
9
0
23 Sep 2021
Small-Bench NLP: Benchmark for small single GPU trained models in Natural Language Processing
K. Kanakarajan
Bhuvana Kundumani
Malaikannan Sankarasubbu
ALM, MoE
59
5
0
22 Sep 2021
Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
Yi Tay
Mostafa Dehghani
J. Rao
W. Fedus
Samira Abnar
Hyung Won Chung
Sharan Narang
Dani Yogatama
Ashish Vaswani
Donald Metzler
283
115
0
22 Sep 2021
RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation
Md. Akmal Haidar
Nithin Anchuri
Mehdi Rezagholizadeh
Abbas Ghaddar
Philippe Langlais
Pascal Poupart
111
22
0
21 Sep 2021
Knowledge Distillation with Noisy Labels for Natural Language Understanding
Shivendra Bhardwaj
Abbas Ghaddar
Ahmad Rashid
Khalil Bibi
Cheng-huan Li
A. Ghodsi
Philippe Langlais
Mehdi Rezagholizadeh
53
1
0
21 Sep 2021
ConvFiT: Conversational Fine-Tuning of Pretrained Language Models
Ivan Vulić
Pei-hao Su
Sam Coope
D. Gerz
Paweł Budzianowski
I. Casanueva
Nikola Mrkšić
Tsung-Hsien Wen
100
37
0
21 Sep 2021