ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

QED: A Framework and Dataset for Explanations in Question Answering

8 September 2020
Matthew Lamm, J. Palomaki, Chris Alberti, D. Andor, Eunsol Choi, Livio Baldini Soares, Michael Collins

Papers citing "QED: A Framework and Dataset for Explanations in Question Answering"

30 papers shown.
A kinetic-based regularization method for data science applications
Abhisek Ganguly, Alessandro Gabbana, Vybhav Rao, Sauro Succi, Santosh Ansumali
06 Mar 2025

Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs
Nicolas Boizard, Kevin El Haddad, Céline Hudelot, Pierre Colombo
28 Jan 2025

Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models
Xiao Cui, Mo Zhu, Yulei Qin, Liang Xie, Wengang Zhou, Haoyang Li
19 Dec 2024

PORT: Preference Optimization on Reasoning Traces
Salem Lahlou, Abdalgader Abubaker, Hakim Hacid
23 Jun 2024

WT5?! Training Text-to-Text Models to Explain their Predictions
Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, Karishma Malkan
30 Apr 2020

Towards Transparent and Explainable Attention Models
Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran
29 Apr 2020

AmbigQA: Answering Ambiguous Open-domain Questions
Sewon Min, Julian Michael, Hannaneh Hajishirzi, Luke Zettlemoyer
22 Apr 2020

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi, Yoav Goldberg
07 Apr 2020

Break It Down: A Question Understanding Benchmark
Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gardner, Yoav Goldberg, Daniel Deutch, Jonathan Berant
31 Jan 2020

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
23 Oct 2019

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations
Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom
07 Oct 2019

WIQA: A dataset for "What if..." reasoning over procedural text
Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Antoine Bosselut, Peter Clark
10 Sep 2019

Attention is not not Explanation
Sarah Wiegreffe, Yuval Pinter
13 Aug 2019

SpanBERT: Improving Pre-training by Representing and Predicting Spans
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy
24 Jul 2019

BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova
24 May 2019

Attention is not Explanation
Sarthak Jain, Byron C. Wallace
26 Feb 2019

A BERT Baseline for the Natural Questions
Chris Alberti, Kenton Lee, Michael Collins
24 Jan 2019

Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl
11 Jan 2019

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
04 Dec 2018

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
11 Oct 2018

HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, Christopher D. Manning
25 Sep 2018

Textual Analogy Parsing: What's Shared and What's Compared among Analogous Facts
Matthew Lamm, Arun Tejasvi Chaganty, Christopher D. Manning, Dan Jurafsky, Percy Liang
07 Sep 2018

CoQA: A Conversational Question Answering Challenge
Siva Reddy, Danqi Chen, Christopher D. Manning
21 Aug 2018

End-to-end Neural Coreference Resolution
Kenton Lee, Luheng He, M. Lewis, Luke Zettlemoyer
21 Jul 2017

Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
A. Ross, M. C. Hughes, Finale Doshi-Velez
10 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
28 Feb 2017

Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations
Upol Ehsan, Brent Harrison, Larry Chan, Mark O. Riedl
25 Feb 2017

The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations
Lasha Abzianidze, Johannes Bjerva, Kilian Evang, Hessel Haagsma, Rik van Noord, Pierre Ludmann, Duc-Duy Nguyen, Johan Bos
13 Feb 2017

SQuAD: 100,000+ Questions for Machine Comprehension of Text
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
16 Jun 2016

The Mythos of Model Interpretability
Zachary Chase Lipton
10 Jun 2016