
MultiHoax: A Dataset of Multi-hop False-Premise Questions

Main: 8 pages · Appendix: 7 pages · Bibliography: 4 pages · 2 figures · 22 tables
Abstract

As Large Language Models are increasingly deployed in high-stakes domains, their ability to detect false assumptions and reason critically is crucial for ensuring reliable outputs. False-premise questions (FPQs) serve as an important evaluation method by exposing cases where flawed assumptions lead to incorrect responses. While existing benchmarks focus on single-hop FPQs, real-world reasoning often requires multi-hop inference, where models must verify consistency across multiple reasoning steps rather than relying on surface-level cues. To address this gap, we introduce MultiHoax, a benchmark for evaluating LLMs' ability to handle false premises in complex, multi-step reasoning tasks. Our dataset spans seven countries and ten diverse knowledge categories, using Wikipedia as the primary knowledge source to enable factual reasoning across regions. Experiments reveal that state-of-the-art LLMs struggle to detect false premises across different countries, knowledge categories, and multi-hop reasoning types, highlighting the need for improved false premise detection and more robust multi-hop reasoning capabilities in LLMs.

@article{shafiei2025_2506.00264,
  title={MultiHoax: A Dataset of Multi-hop False-Premise Questions},
  author={Mohammadamin Shafiei and Hamidreza Saffari and Nafise Sadat Moosavi},
  journal={arXiv preprint arXiv:2506.00264},
  year={2025}
}