ResearchTrend.AI

Polarization of Autonomous Generative AI Agents Under Echo Chambers
Masaya Ohagi · 19 February 2024 · arXiv: 2402.12212
Topics: LLMAG
Papers citing "Polarization of Autonomous Generative AI Agents Under Echo Chambers"

5 papers shown
Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark
Minje Choi, Jiaxin Pei, Sagar Kumar, Chang Shu, David Jurgens
Topics: ALM, LLMAG · Metrics: 91 / 72 / 0 · 24 May 2023
Whose Opinions Do Language Models Reflect?
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto
Metrics: 76 / 434 / 0 · 30 Mar 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM · Metrics: 877 / 12,973 / 0 · 04 Mar 2022
SimCSE: Simple Contrastive Learning of Sentence Embeddings
Tianyu Gao, Xingcheng Yao, Danqi Chen
Topics: AILaw, SSL · Metrics: 261 / 3,396 / 0 · 18 Apr 2021
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith
Metrics: 158 / 1,209 / 0 · 24 Sep 2020