LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models
arXiv:2504.10430 · 14 April 2025
Minqian Liu, Zhiyang Xu, Xinyi Zhang, Heajun An, Sarvech Qadir, Qi Zhang, Pamela J. Wisniewski, Jin-Hee Cho, Sang Won Lee, Ruoxi Jia, Lifu Huang

Papers citing "LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models" (4 of 4 papers shown)

Exploring Consciousness in LLMs: A Systematic Survey of Theories, Implementations, and Frontier Risks
Sirui Chen, Shuqin Ma, Shu Yu, Hanwang Zhang, Shengjie Zhao, Chaochao Lu
26 May 2025

Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach
Shannon Lodoen, Alexi Orchard
14 May 2025

How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation
Ruohao Guo, Wei Xu, Alan Ritter
12 March 2025

Toward Integrated Solutions: A Systematic Interdisciplinary Review of Cybergrooming Research
Heajun An, Marcos Silva, Qi Zhang, Arav Singh, Minqian Liu, ..., Sarvech Qadir, Sang Won Lee, Lifu Huang, Pamela Wisniewski, Jin-Hee Cho
18 February 2025