TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

9 May 2017
Mandar Joshi
Eunsol Choi
Daniel S. Weld
Luke Zettlemoyer
RALM

Papers citing "TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension"

Showing 50 of 1,823 citing papers
ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization
Weixin Liu
Xuyi Chen
Jiaxiang Liu
Shi Feng
Yu Sun
Hao Tian
Hua Wu
95
2
0
09 Jan 2023
Universal Information Extraction as Unified Semantic Matching
Jie Lou
Yaojie Lu
Dai Dai
Wei Jia
Hongyu Lin
Xianpei Han
Le Sun
Hua Wu
82
72
0
09 Jan 2023
Integrating Semantic Information into Sketchy Reading Module of Retro-Reader for Vietnamese Machine Reading Comprehension
Hang Le
Viet-Duc Ho
Duc-Vu Nguyen
Ngan Luu-Thuy Nguyen
72
2
0
01 Jan 2023
A Survey on Knowledge-Enhanced Pre-trained Language Models
Chaoqi Zhen
Yanlei Shang
Xiangyu Liu
Yifei Li
Yong Chen
Dell Zhang
VLM, KELM
96
3
0
27 Dec 2022
Large Language Models Encode Clinical Knowledge
K. Singhal
Shekoofeh Azizi
T. Tu
S. S. Mahdavi
Jason W. Wei
...
A. Rajkomar
Joelle Barral
Christopher Semturs
Alan Karthikesalingam
Vivek Natarajan
LM&MA, ELM, AI4MH
355
2,421
0
26 Dec 2022
Pretraining Without Attention
Junxiong Wang
J. Yan
Albert Gu
Alexander M. Rush
96
49
0
20 Dec 2022
SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks
Suwon Shon
Siddhant Arora
Chyi-Jiunn Lin
Ankita Pasad
Felix Wu
Roshan S. Sharma
Wei Wu
Hung-yi Lee
Karen Livescu
Shinji Watanabe
ELM
85
33
0
20 Dec 2022
To Adapt or to Annotate: Challenges and Interventions for Domain Adaptation in Open-Domain Question Answering
Dheeru Dua
Emma Strubell
Sameer Singh
Pat Verga
OOD
98
3
0
20 Dec 2022
What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary
Ori Ram
L. Bezalel
Adi Zicher
Yonatan Belinkov
Jonathan Berant
Amir Globerson
107
37
0
20 Dec 2022
Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering
Zhiyong Wu
Yaoxiang Wang
Jiacheng Ye
Lingpeng Kong
136
141
0
20 Dec 2022
Defending Against Disinformation Attacks in Open-Domain Question Answering
Orion Weller
Aleem Khan
Nathaniel Weir
Dawn J Lawrie
Benjamin Van Durme
AAML
163
7
0
20 Dec 2022
Tokenization Consistency Matters for Generative Models on Extractive NLP Tasks
Kaiser Sun
Peng Qi
Yuhao Zhang
Lan Liu
William Yang Wang
Zhiheng Huang
85
9
0
19 Dec 2022
Evaluating Human-Language Model Interaction
Mina Lee
Megha Srivastava
Amelia Hardy
John Thickstun
Esin Durmus
...
Hancheng Cao
Tony Lee
Rishi Bommasani
Michael S. Bernstein
Percy Liang
LM&MA, ALM
114
102
0
19 Dec 2022
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
Hongjin Su
Weijia Shi
Jungo Kasai
Yizhong Wang
Yushi Hu
Mari Ostendorf
Wen-tau Yih
Noah A. Smith
Luke Zettlemoyer
Tao Yu
121
303
0
19 Dec 2022
Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers
J. Michaelov
Benjamin Bergen
44
17
0
16 Dec 2022
Self-Prompting Large Language Models for Zero-Shot Open-Domain QA
Junlong Li
Jinyuan Wang
Zhuosheng Zhang
Hai Zhao
LRM
97
38
0
16 Dec 2022
FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference
Michiel de Jong
Yury Zemlyanskiy
Joshua Ainslie
Nicholas FitzGerald
Sumit Sanghai
Fei Sha
William W. Cohen
VLM
77
36
0
15 Dec 2022
Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models
Bernd Bohnet
Vinh Q. Tran
Pat Verga
Roee Aharoni
D. Andor
...
Michael Collins
Dipanjan Das
Donald Metzler
Slav Petrov
Kellie Webster
126
65
0
15 Dec 2022
CLAM: Selective Clarification for Ambiguous Questions with Generative Language Models
Lorenz Kuhn
Y. Gal
Sebastian Farquhar
87
41
0
15 Dec 2022
APOLLO: An Optimized Training Approach for Long-form Numerical Reasoning
Jiashuo Sun
Hang Zhang
Chen Lin
Nan Duan
Yeyun Gong
Jian Guo
AIMat, RALM
78
6
0
14 Dec 2022
Structured Prompting: Scaling In-Context Learning to 1,000 Examples
Y. Hao
Yutao Sun
Li Dong
Zhixiong Han
Yuxian Gu
Furu Wei
LRM
66
75
0
13 Dec 2022
BigText-QA: Question Answering over a Large-Scale Hybrid Knowledge Graph
Jingjing Xu
M. Biryukov
Martin Theobald
V. Venugopal
71
0
0
12 Dec 2022
Momentum Contrastive Pre-training for Question Answering
Minda Hu
Muzhi Li
Yasheng Wang
Irwin King
96
3
0
12 Dec 2022
From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader
Weiwen Xu
Xin Li
Wenxuan Zhang
Meng Zhou
W. Lam
Luo Si
Lidong Bing
90
2
0
09 Dec 2022
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Approaches
A. Mohammadi
Reza Ramezani
Ahmad Baraani
65
3
0
08 Dec 2022
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Datasets and Metrics
A. Mohammadi
Reza Ramezani
Ahmad Baraani
80
1
0
08 Dec 2022
DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing
Conglong Li
Z. Yao
Xiaoxia Wu
Minjia Zhang
Connor Holmes
Cheng Li
Yuxiong He
69
25
0
07 Dec 2022
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
Chengyue Jiang
Yong Jiang
Weiqi Wu
Pengjun Xie
Kewei Tu
59
5
0
03 Dec 2022
Nonparametric Masked Language Modeling
Sewon Min
Weijia Shi
M. Lewis
Xilun Chen
Wen-tau Yih
Hannaneh Hajishirzi
Luke Zettlemoyer
RALM
163
51
0
02 Dec 2022
Penalizing Confident Predictions on Largely Perturbed Inputs Does Not Improve Out-of-Distribution Generalization in Question Answering
Kazutoshi Shinoda
Saku Sugawara
Akiko Aizawa
OOD, AAML
55
0
0
29 Nov 2022
Can Open-Domain QA Reader Utilize External Knowledge Efficiently like Humans?
Neeraj Varshney
Man Luo
Chitta Baral
RALM
65
12
0
23 Nov 2022
FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering
Akhil Kedia
Mohd Abbas Zaidi
Haejun Lee
67
10
0
18 Nov 2022
Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers
Z. Yao
Xiaoxia Wu
Conglong Li
Connor Holmes
Minjia Zhang
Cheng-rong Li
Yuxiong He
87
12
0
17 Nov 2022
Data-Efficient Autoregressive Document Retrieval for Fact Verification
James Thorne
RALM
70
7
0
17 Nov 2022
Task-aware Retrieval with Instructions
Akari Asai
Timo Schick
Patrick Lewis
Xilun Chen
Gautier Izacard
Sebastian Riedel
Hannaneh Hajishirzi
Wen-tau Yih
111
98
0
16 Nov 2022
Large Language Models Struggle to Learn Long-Tail Knowledge
Nikhil Kandpal
H. Deng
Adam Roberts
Eric Wallace
Colin Raffel
RALM, KELM
166
420
0
15 Nov 2022
Empowering Language Models with Knowledge Graph Reasoning for Question Answering
Ziniu Hu
Yichong Xu
Wenhao Yu
Shuohang Wang
Ziyi Yang
Chenguang Zhu
Kai-Wei Chang
Yizhou Sun
KELM, RALM, LRM
102
26
0
15 Nov 2022
A Survey for Efficient Open Domain Question Answering
Qin Zhang
Shan Chen
Dongkuan Xu
Qingqing Cao
Xiaojun Chen
Trevor Cohn
Meng Fang
90
36
0
15 Nov 2022
Calibrated Interpretation: Confidence Estimation in Semantic Parsing
Elias Stengel-Eskin
Benjamin Van Durme
UQLM
164
25
0
14 Nov 2022
A Survey of Knowledge Enhanced Pre-trained Language Models
Linmei Hu
Zeyi Liu
Ziwang Zhao
Lei Hou
Liqiang Nie
Juanzi Li
KELM, VLM
170
138
0
11 Nov 2022
Large Language Models with Controllable Working Memory
Daliang Li
A. S. Rawat
Manzil Zaheer
Xin Wang
Michal Lukasik
Andreas Veit
Felix X. Yu
Surinder Kumar
KELM
146
171
0
09 Nov 2022
Passage-Mask: A Learnable Regularization Strategy for Retriever-Reader Models
Shujian Zhang
Chengyue Gong
Xingchao Liu
RALM
150
6
0
02 Nov 2022
Two-stage LLM Fine-tuning with Less Specialization and More Generalization
Yihan Wang
Si Si
Daliang Li
Michal Lukasik
Felix X. Yu
Cho-Jui Hsieh
Inderjit S Dhillon
Sanjiv Kumar
137
30
0
01 Nov 2022
Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives
Si Sun
Chenyan Xiong
Yue Yu
Arnold Overwijk
Zhiyuan Liu
Jie Bao
84
6
0
31 Oct 2022
An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks
Yuxiang Wu
Yu Zhao
Baotian Hu
Pasquale Minervini
Pontus Stenetorp
Sebastian Riedel
RALM, KELM
101
45
0
30 Oct 2022
What Language Model to Train if You Have One Million GPU Hours?
Teven Le Scao
Thomas Wang
Daniel Hesslow
Lucile Saulnier
Stas Bekman
...
Lintang Sutawika
Jaesung Tae
Zheng-Xin Yong
Julien Launay
Iz Beltagy
MoE, AI4CE
320
109
0
27 Oct 2022
TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack
Yu Cao
Dianqi Li
Meng Fang
Dinesh Manocha
Jun Gao
Yibing Zhan
Dacheng Tao
AAML
83
17
0
27 Oct 2022
Analyzing Multi-Task Learning for Abstractive Text Summarization
Frederic Kirstein
Jan Philip Wahle
Terry Ruas
Bela Gipp
81
4
0
26 Oct 2022
RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering
Victor Zhong
Weijia Shi
Wen-tau Yih
Luke Zettlemoyer
106
21
0
25 Oct 2022
Rich Knowledge Sources Bring Complex Knowledge Conflicts: Recalibrating Models to Reflect Conflicting Evidence
Hung-Ting Chen
Michael J.Q. Zhang
Eunsol Choi
RALMHILM
141
100
0
25 Oct 2022