ResearchTrend.AI
arXiv:2104.04725
Fool Me Twice: Entailment from Wikipedia Gamification

10 April 2021
Julian Martin Eisenschlos, Bhuwan Dhingra, Jannis Bulian, Benjamin Borschinger, Jordan L. Boyd-Graber

Papers citing "Fool Me Twice: Entailment from Wikipedia Gamification"

17 papers shown

Self-Rationalization in the Wild: A Large Scale Out-of-Distribution Evaluation on NLI-related tasks
Jing Yang, Max Glockner, Anderson de Rezende Rocha, Iryna Gurevych · LRM · 07 Feb 2025

Claim Verification in the Age of Large Language Models: A Survey
A. Dmonte, Roland Oruche, Marcos Zampieri, Prasad Calyam, Isabelle Augenstein · 26 Aug 2024

More Victories, Less Cooperation: Assessing Cicero's Diplomacy Play
Wichayaporn Wongkamjan, Feng Gu, Yanze Wang, Ulf Hermjakob, Jonathan May, Brandon M. Stewart, Jonathan K. Kummerfeld, Denis Peskoff, Jordan L. Boyd-Graber · 07 Jun 2024

How the Advent of Ubiquitous Large Language Models both Stymie and Turbocharge Dynamic Adversarial Question Generation
Yoo Yeon Sung, Ishani Mondal, Jordan L. Boyd-Graber · 20 Jan 2024

Language Models Hallucinate, but May Excel at Fact Verification
Jian Guan, Jesse Dodge, David Wadden, Minlie Huang, Hao Peng · LRM, HILM · 23 Oct 2023

Continually Improving Extractive QA via Human Feedback
Ge Gao, Hung-Ting Chen, Yoav Artzi, Eunsol Choi · 21 May 2023

Augmented Large Language Models with Parametric Knowledge Guiding
Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang · KELM, RALM · 08 May 2023

Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation
Max Glockner, Yufang Hou, Iryna Gurevych · OffRL · 25 Oct 2022

Generate rather than Retrieve: Large Language Models are Strong Context Generators
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng Jiang · RALM, AIMat · 21 Sep 2022

Synthetic Disinformation Attacks on Automated Fact Verification Systems
Y. Du, Antoine Bosselut, Christopher D. Manning · AAML, OffRL · 18 Feb 2022

CommonsenseQA 2.0: Exposing the Limits of AI through Gamification
Alon Talmor, Ori Yoran, Ronan Le Bras, Chandrasekhar Bhagavatula, Yoav Goldberg, Yejin Choi, Jonathan Berant · ELM · 14 Jan 2022

Mention Memory: incorporating textual knowledge into Transformers through entity mention attention
Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, William W. Cohen · RALM · 12 Oct 2021

Explainable Fact-checking through Question Answering
Jing Yang, D. Vega-Oliveros, Taís Seibt, Anderson de Rezende Rocha · HILM · 11 Oct 2021

CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge
Yasumasa Onoe, Michael J.Q. Zhang, Eunsol Choi, Greg Durrett · HILM · 03 Sep 2021

A Survey on Automated Fact-Checking
Zhijiang Guo, M. Schlichtkrull, Andreas Vlachos · 26 Aug 2021

FaVIQ: FAct Verification from Information-seeking Questions
Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh Hajishirzi · HILM · 05 Jul 2021

Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme · 02 May 2018