ISAAQ -- Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention

arXiv: 2010.00562
1 October 2020
José Manuel Gómez-Pérez, Raúl Ortega

Papers citing "ISAAQ -- Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention"

7 / 7 papers shown

Towards Language-driven Scientific AI
José Manuel Gómez-Pérez
27 Oct 2022

Artificial Intelligence and Natural Language Processing and Understanding in Space: A Methodological Framework and Four ESA Case Studies
José Manuel Gómez-Pérez, Andrés García-Silva, R. Leone, M. Albani, Moritz Fontaine, C. Poncet, L. Summerer, A. Donati, Ilaria Roma, Stefano Scaglioni
07 Oct 2022

MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering
Fangzhi Xu, Qika Lin, Jun Liu, Lingling Zhang, Tianzhe Zhao, Qianyi Chai, Yudai Pan
06 Dec 2021

Perhaps PTLMs Should Go to School -- A Task to Assess Open Book and Closed Book QA
Manuel R. Ciosici, Joe Cecil, Alex Hedges, Dong-Ho Lee, Marjorie Freedman, R. Weischedel
04 Oct 2021

RL-CSDia: Representation Learning of Computer Science Diagrams
Shaowei Wang, Lingling Zhang, Xuan Luo, Yi Yang, Xin Hu, Jun Liu
10 Mar 2021

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
17 Sep 2019

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
03 Sep 2019