Aligning Large Language Models for Faithful Integrity Against Opposing Argument

3 January 2025
Yong Zhao, Yang Deng, See-Kiong Ng, Tat-Seng Chua
ArXiv (abs) · PDF · HTML

Papers citing "Aligning Large Language Models for Faithful Integrity Against Opposing Argument"

2 / 2 papers shown
Reasoning Models Are More Easily Gaslighted Than You Think
B. Zhu, Hailong Yin, Jingjing Chen, Yu Jiang
LRM · 11 Jun 2025
Calling a Spade a Heart: Gaslighting Multimodal Large Language Models via Negation
Bin Zhu, Huiyan Qi, Yinxuan Gui, Jingjing Chen, Chong-Wah Ngo, Ee-Peng Lim
31 Jan 2025