Training language models to be warm and empathetic makes them less reliable and more sycophantic
Lujain Ibrahim, Franziska Sofia Hafner, Luc Rocher
29 July 2025 · arXiv: 2507.21919

Papers citing this work (4 of 4 shown):
1. Neural Transparency: Mechanistic Interpretability Interfaces for Anticipating Model Behaviors for Personalized AI · Sheer Karny, Anthony Baez, Pat Pataranutaporn · AAML · 31 Oct 2025
2. The Narcissus Hypothesis: Descending to the Rung of Illusion · Riccardo Cadei, Christian Internò · 22 Sep 2025
3. PersonaFuse: A Personality Activation-Driven Framework for Enhancing Human-LLM Interactions · Yixuan Tang, Yi Yang, Ahmed Abbasi · 09 Sep 2025
4. Measuring and mitigating overreliance is necessary for building human-compatible AI · Lujain Ibrahim, Katherine M. Collins, Sunnie S. Y. Kim, Anka Reuel, Max Lamparth, ..., Siddharth Swaroop, Ilia Sucholutsky, A. Strait, Q. V. Liao, Umang Bhatt · 08 Sep 2025