ResearchTrend.AI
LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models

14 April 2025
Minqian Liu, Zhiyang Xu, Xinyi Zhang, Heajun An, Sarvech Qadir, Qi Zhang, Pamela J. Wisniewski, Jin-Hee Cho, Sang Won Lee, Ruoxi Jia, Lifu Huang

Papers citing "LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models"

4 / 4 papers shown
Exploring Consciousness in LLMs: A Systematic Survey of Theories, Implementations, and Frontier Risks
Sirui Chen, Shuqin Ma, Shu Yu, Hanwang Zhang, Shengjie Zhao, Chaochao Lu
26 May 2025
Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach
Shannon Lodoen, Alexi Orchard
14 May 2025
How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation
Ruohao Guo, Wei Xu, Alan Ritter
12 Mar 2025
Toward Integrated Solutions: A Systematic Interdisciplinary Review of Cybergrooming Research
Heajun An, Marcos Silva, Qi Zhang, Arav Singh, Minqian Liu, ..., Sarvech Qadir, Sang Won Lee, Lifu Huang, Pamela Wisniewski, Jin-Hee Cho
18 Feb 2025