ResearchTrend.AI
Exploring Human-LLM Conversations: Mental Models and the Originator of Toxicity

8 July 2024
Johannes Schneider
Arianna Casanova Flores
Anne-Catherine Kranz
ArXiv (abs) · PDF · HTML

Papers citing "Exploring Human-LLM Conversations: Mental Models and the Originator of Toxicity"

2 / 2 papers shown
Consistency of Responses and Continuations Generated by Large Language Models on Social Media
Wenlu Fan, Yinlin Zhu, Chenyang Wang, Bin Wang, Wentao Xu
14 Jan 2025
Acceptable Use Policies for Foundation Models
Kevin Klyman
29 Aug 2024