Thought-Like-Pro: Enhancing Reasoning of Large Language Models through Self-Driven Prolog-based Chain-of-Thought
arXiv:2407.14562, 18 July 2024
Authors: Xiaoyu Tan, Yongxin Deng, Xihe Qiu, Weidi Xu, Chao Qu, Wei Chu, Yinghui Xu, Yuan Qi
Tags: LRM, AI4CE, LM&Ro
Papers citing "Thought-Like-Pro: Enhancing Reasoning of Large Language Models through Self-Driven Prolog-based Chain-of-Thought" (5 papers)
1. Enhancing Mathematical Reasoning in LLMs with Background Operators
   Jiajun Chen, Yik-Cheung Tam
   Tags: LRM
   05 Dec 2024
2. Promoting Equality in Large Language Models: Identifying and Mitigating the Implicit Bias based on Bayesian Theory
   Yongxin Deng, Xihe Qiu, Xiaoyu Tan, Jing Pan, Chen Jue, Zhijun Fang, Yinghui Xu, Wei Chu, Yuan Qi
   20 Aug 2024
3. Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)
   Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li
   20 Oct 2023
4. Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought
   Abulhair Saparov, He He
   Tags: ELM, LRM, ReLM
   03 Oct 2022
5. Large Language Models are Zero-Shot Reasoners
   Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
   Tags: ReLM, LRM
   24 May 2022