arXiv:2406.16572
ChatGPT's financial discrimination between rich and poor -- misaligned with human behavior and expectations

24 June 2024
Dmitri Bershadskyy
Florian E. Sachs
Joachim Weimann
Abstract

ChatGPT has disrupted the application of machine-learning methods and drastically lowered the barrier to their use. Chatbots are now employed in a wide range of situations: they provide advice, assist in writing source code, and assess and summarize information from various sources. Their scope, however, is not limited to aiding humans; they can also take on tasks such as negotiating or bargaining. To understand the implications of chatbot usage for bargaining situations, we conduct a laboratory experiment based on the ultimatum game. In the standard ultimatum game, two human players interact: the receiver decides whether to accept or reject a monetary offer from the proposer. To shed light on the new bargaining situation, we let ChatGPT make the offer to a human player. In this novel design, we vary the wealth of the receivers. Our results indicate that humans hold the same beliefs about human and chatbot proposers. However, these beliefs are contradicted on an important point: human proposers favor poor receivers, as correctly anticipated, whereas ChatGPT favors rich receivers, which humans did not expect. These results imply that ChatGPT's answers are not aligned with human behavior and that humans do not anticipate this difference.
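
The paper does not publish code, but the proposer role that ChatGPT plays in the experiment can be illustrated with a minimal sketch. The prompt wording, model name, stake size, and wealth framing below are assumptions for illustration only, not the authors' actual protocol.

```python
# Hypothetical sketch: eliciting an ultimatum-game offer from a chatbot proposer.
# Prompt text, model name, and stake size are illustrative assumptions, not the
# procedure used in the paper.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

STAKE = 10.00  # assumed total amount to be split between proposer and receiver


def elicit_offer(receiver_wealth: str) -> str:
    """Ask the model, acting as proposer, how much to offer a receiver
    described as either 'rich' or 'poor'."""
    prompt = (
        f"You are the proposer in an ultimatum game with a stake of {STAKE:.2f}. "
        f"The receiver is {receiver_wealth}. If the receiver rejects your offer, "
        "both of you receive nothing. How much do you offer? Reply with a number only."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the paper simply refers to ChatGPT
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic replies make conditions easier to compare
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    # Varying the receiver description mirrors the paper's wealth manipulation.
    for wealth in ("rich", "poor"):
        print(wealth, elicit_offer(wealth))
```

Comparing the offers returned under the "rich" and "poor" conditions corresponds to the treatment variation described in the abstract; the paper's finding is that such offers favor rich receivers, contrary to human behavior and expectations.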

View on arXiv