QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition
arXiv:2203.01543 (v2) · 3 March 2022
Andy T. Liu, Wei Xiao, Henghui Zhu, Dejiao Zhang, Shang-Wen Li, Andrew O. Arnold
Papers citing "QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition" (19 papers shown)
1. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing (28 Jul 2021; 3,977 citations)
   Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig

2. Template-Based Named Entity Recognition Using BART (3 Jun 2021; 353 citations)
   Leyang Cui, Yu Wu, Jian Liu, Sen Yang, Yue Zhang

3. True Few-Shot Learning with Language Models (24 May 2021; 437 citations)
   Ethan Perez, Douwe Kiela, Kyunghyun Cho

4. Making Pre-trained Language Models Better Few-shot Learners (31 Dec 2020; 1,971 citations)
   Tianyu Gao, Adam Fisch, Danqi Chen

5. Few-Shot Named Entity Recognition: A Comprehensive Study (29 Dec 2020; 80 citations)
   Jiaxin Huang, Chunyuan Li, K. Subudhi, Damien Jose, S. Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, Jiawei Han

6. It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (15 Sep 2020; 973 citations)
   Timo Schick, Hinrich Schütze

7. Example-Based Named Entity Recognition (24 Aug 2020; 33 citations)
   Morteza Ziyadi, Yuting Sun, Abhishek Goswami, Jade Huang, Weizhu Chen

8. Revisiting Few-sample BERT Fine-tuning (10 Jun 2020; 445 citations)
   Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi

9. UnifiedQA: Crossing Format Boundaries With a Single QA System (2 May 2020; 739 citations)
   Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, Hannaneh Hajishirzi

10. Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping (15 Feb 2020; 597 citations)
    Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, Noah A. Smith

11. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (29 Oct 2019; 10,848 citations)
    M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer

12. A Unified MRC Framework for Named Entity Recognition (25 Oct 2019; 637 citations)
    Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Leilei Gan, Jiwei Li

13. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (11 Oct 2018; 95,114 citations)
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

14. Ultra-Fine Entity Typing (13 Jul 2018; 210 citations)
    Eunsol Choi, Omer Levy, Yejin Choi, Luke Zettlemoyer

15. Representation Learning with Contrastive Predictive Coding (10 Jul 2018; 10,349 citations)
    Aaron van den Oord, Yazhe Li, Oriol Vinyals

16. Know What You Don't Know: Unanswerable Questions for SQuAD (11 Jun 2018; 2,845 citations)
    Pranav Rajpurkar, Robin Jia, Percy Liang

17. Accelerating Neural Transformer via an Average Attention Network (2 May 2018; 120 citations)
    Biao Zhang, Deyi Xiong, Jinsong Su

18. Attention Is All You Need (12 Jun 2017; 132,199 citations)
    Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin

19. SQuAD: 100,000+ Questions for Machine Comprehension of Text (16 Jun 2016; 8,160 citations)
    Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang