Complex QA and language models hybrid architectures, Survey
  Xavier Daull, P. Bellot, Emmanuel Bruno, Vincent Martin, Elisabeth Murisasco [ELM]
  arXiv:2302.09051 · 17 February 2023

Papers citing "Complex QA and language models hybrid architectures, Survey"

Showing 50 of 302 citing papers.
A Survey on GPT-3
  M. Zong, Bhaskar Krishnamachari · 01 Dec 2022
Few-shot Query-Focused Summarization with Prefix-Merging
  Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li · 29 Nov 2022
TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense Question Answering
  Yueqing Sun, Yu Zhang, Le Qi, Qi Shi [ReLM, RALM, LRM] · 24 Nov 2022
Automatic Generation of Socratic Subquestions for Teaching Math Word Problems
  Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, Mrinmaya Sachan [AIMat] · 23 Nov 2022
Can Open-Domain QA Reader Utilize External Knowledge Efficiently like Humans?
  Neeraj Varshney, Man Luo, Chitta Baral [RALM] · 23 Nov 2022
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
  Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen [ReLM, ReCod, LRM] · 22 Nov 2022
HyperTuning: Toward Adapting Large Language Models without Back-propagation
  Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen · 22 Nov 2022
Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions
  Rafal Kocielnik, Sara Kangaslahti, Shrimai Prabhumoye, M. Hari, R. Alvarez, Anima Anandkumar · 21 Nov 2022
PAL: Program-aided Language Models
  Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig [ReLM, LRM] · 18 Nov 2022
Galactica: A Large Language Model for Science
  Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, Robert Stojnic [ELM, ReLM] · 16 Nov 2022
A Universal Discriminator for Zero-Shot Generalization
  Haike Xu, Zongyu Lin, Jing Zhou, Yanan Zheng, Zhilin Yang [AI4CE] · 15 Nov 2022
Teaching Algorithmic Reasoning via In-context Learning
  Hattie Zhou, Azade Nova, Hugo Larochelle, Rameswar Panda, Behnam Neyshabur, Hanie Sedghi [LRM, ReLM] · 15 Nov 2022
The Expertise Problem: Learning from Specialized Feedback
  Oliver Daniels-Koch, Rachel Freedman [OffRL] · 12 Nov 2022
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf [VLM] · 09 Nov 2022
PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
  Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren [LRM, ReLM] · 03 Nov 2022
Reduce, Reuse, Recycle: Improving Training Efficiency with Distillation
  Cody Blakeney, Jessica Zosa Forde, Jonathan Frankle, Ziliang Zong, Matthew L. Leavitt [VLM] · 01 Nov 2022
RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees
  Tengxiao Liu, Qipeng Guo, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang [LRM] · 31 Oct 2022
Solving Math Word Problems via Cooperative Reasoning induced Language Models
  Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Ruyi Gan, Jiaxing Zhang, Yujiu Yang [ReLM, LRM] · 28 Oct 2022
Teacher-Student Architecture for Knowledge Learning: A Survey
  Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu · 28 Oct 2022
COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning
  Yue Yu, Chenyan Xiong, Si Sun, Chao Zhang, Arnold Overwijk [VLM, OOD] · 27 Oct 2022
Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions
  Weirui Ye, Pieter Abbeel, Yang Gao · 23 Oct 2022
Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning
  Oyvind Tafjord, Bhavana Dalvi, Peter Clark [ReLM, KELM, LRM] · 21 Oct 2022
Boosting Natural Language Generation from Instructions with Meta-Learning
  Budhaditya Deb, Guoqing Zheng, Ahmed Hassan Awadallah · 20 Oct 2022
Large Language Models Can Self-Improve
  Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han [ReLM, AI4MH, LRM] · 20 Oct 2022
Composing Ensembles of Pre-trained Models via Iterative Consensus
  Shuang Li, Yilun Du, J. Tenenbaum, Antonio Torralba, Igor Mordatch [MoMe] · 20 Oct 2022
Scaling Instruction-Finetuned Language Models
  Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, ..., Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason W. Wei [ReLM, LRM] · 20 Oct 2022
Transcending Scaling Laws with 0.1% Extra Compute
  Yi Tay, Jason W. Wei, Hyung Won Chung, Vinh Q. Tran, David R. So, ..., Donald Metzler, Slav Petrov, N. Houlsby, Quoc V. Le, Mostafa Dehghani [LRM] · 20 Oct 2022
Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts
  Xiangyang Liu, Tianxiang Sun, Xuanjing Huang, Xipeng Qiu [VLM] · 20 Oct 2022
Language Models of Code are Few-Shot Commonsense Learners
  Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig [ReLM, LRM] · 13 Oct 2022
Explanations from Large Language Models Make Small Reasoners Better
  Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Zoey Chen, Xinlu Zhang, ..., Jingu Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan [ReLM, LRM] · 13 Oct 2022
Large Language Models are few(1)-shot Table Reasoners
  Wenhu Chen [LMTD, ReLM, LRM] · 13 Oct 2022
Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning
  A. Bakhtin, David J. Wu, Adam Lerer, Jonathan Gray, Athul Paul Jacob, Gabriele Farina, Alexander H. Miller, Noam Brown · 11 Oct 2022
Mind's Eye: Grounded Language Model Reasoning through Simulation
  Ruibo Liu, Jason W. Wei, S. Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai [ReLM, LRM] · 11 Oct 2022
Large Language Models can Implement Policy Iteration
  Ethan A. Brooks, Logan Walls, Richard L. Lewis, Satinder Singh [LM&Ro, OffRL] · 07 Oct 2022
Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning
  Zih-Yun Chiu, Yi-Lin Tuan, William Yang Wang, Michael C. Yip [OffRL] · 07 Oct 2022
Automatic Chain of Thought Prompting in Large Language Models
  Zhuosheng Zhang, Aston Zhang, Mu Li, Alexander J. Smola [ReLM, LRM] · 07 Oct 2022
Measuring and Narrowing the Compositionality Gap in Language Models
  Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, M. Lewis [ReLM, KELM, LRM] · 07 Oct 2022
Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering
  Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi [RALM] · 06 Oct 2022
Language Models are Multilingual Chain-of-Thought Reasoners
  Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, ..., Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason W. Wei [ReLM, LRM] · 06 Oct 2022
ReAct: Synergizing Reasoning and Acting in Language Models
  Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao [LLMAG, ReLM, LRM] · 06 Oct 2022
GLM-130B: An Open Bilingual Pre-trained Model
  Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, Jie Tang [BDL, LRM] · 05 Oct 2022
Decomposed Prompting: A Modular Approach for Solving Complex Tasks
  Tushar Khot, H. Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, Ashish Sabharwal [ReLM, LRM] · 05 Oct 2022
Complexity-Based Prompting for Multi-Step Reasoning
  Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, Tushar Khot [ReLM, LRM] · 03 Oct 2022
Multimodal Analogical Reasoning over Knowledge Graphs
  Ningyu Zhang, Lei Li, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen · 01 Oct 2022
Compositional Semantic Parsing with Large Language Models
  Andrew Drozdov, Nathanael Scharli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou [ReLM, LRM] · 29 Sep 2022
Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning
  Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan [ReLM, LRM] · 29 Sep 2022
Promptagator: Few-shot Dense Retrieval From 8 Examples
  Zhuyun Dai, Vincent Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, A. Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang [RALM] · 23 Sep 2022
DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation
  Seongmin Hong, Seungjae Moon, Junsoo Kim, Sungjae Lee, Minsub Kim, Dongsoo Lee, Joo-Young Kim · 22 Sep 2022
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
  Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan [ELM, ReLM, LRM] · 20 Sep 2022
OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue
  Zhi Chen, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, Kai Yu · 10 Sep 2022