Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content

5 September 2023
Martin Huschens, Martin Briesch, Dominik Sobania, Franz Rothlauf

Papers citing "Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content"

8 papers shown
Mind the Gap! Choice Independence in Using Multilingual LLMs for Persuasive Co-Writing Tasks in Different Languages
Shreyan Biswas, Alexander Erlei, U. Gadiraju
13 Feb 2025
Enhancing Ground-to-Aerial Image Matching for Visual Misinformation Detection Using Semantic Segmentation
Emanuele Mule, Matteo Pannacci, Ali Ghasemi Goudarzi, Francesco Pro, Lorenzo Papa, Luca Maiano, Irene Amerini
10 Feb 2025
Private Yet Social: How LLM Chatbots Support and Challenge Eating Disorder Recovery
Ryuhaerang Choi, Taehan Kim, Subin Park, Jennifer G Kim, Sung-Ju Lee
16 Dec 2024
Believing Anthropomorphism: Examining the Role of Anthropomorphic Cues on Trust in Large Language Models
Michelle Cohn, Mahima Pushkarna, Gbolahan O. Olanubi, Joseph M. Moran, Daniel Padgett, Zion Mengesha, Courtney Heldreth
09 May 2024
Towards Theoretical Understandings of Self-Consuming Generative Models
Shi Fu, Sen Zhang, Yingjie Wang, Xinmei Tian, Dacheng Tao
19 Feb 2024
Recursive Chain-of-Feedback Prevents Performance Degradation from Redundant Prompting
Jinwoo Ahn, Kyuseung Shin
05 Feb 2024
Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
Martin Briesch, Dominik Sobania, Franz Rothlauf
28 Nov 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022