Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination

20 June 2024
Jongyoon Song, Sangwon Yu, Sungroh Yoon
HILM

Papers citing "Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination"

4 / 4 papers shown
From Misleading Queries to Accurate Answers: A Three-Stage Fine-Tuning Method for LLMs
Guocong Li, Weize Liu, Yihang Wu, Ping Wang, Shuaihan Huang, Hongxia Xu, Jian Wu
KELM, HILM
15 Apr 2025
Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts
Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, Yu-Chuan Su
RALM
22 May 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
RALM
06 Jan 2021