ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2301.11924 · Cited By
AI model GPT-3 (dis)informs us better than humans
Giovanni Spitale, Nikola Biller-Andorno, Federico Germani
Communities: DeLMO
23 January 2023

Papers citing "AI model GPT-3 (dis)informs us better than humans"

48 papers shown
LLM-Generated Fake News Induces Truth Decay in News Ecosystem: A Case Study on Neural News Recommendation
Beizhe Hu, Qiang Sheng, Juan Cao, Yang Li, Danding Wang
28 Apr 2025

Labeling Messages as AI-Generated Does Not Reduce Their Persuasive Effects
Isabel O. Gallegos, Chen Shani, Weiyan Shi, Federico Bianchi, Izzy Gainsburg, Dan Jurafsky, Robb Willer
14 Apr 2025

Increasing happiness through conversations with artificial intelligence
Joseph Heffner, Chongyu Qin, Martin Chadwick, Chris Knutsen, Christopher Summerfield, Zeb Kurth-Nelson, Robb B. Rutledge
Communities: AI4MH
02 Apr 2025

Echoes of Power: Investigating Geopolitical Bias in US and China Large Language Models
Andre G. C. Pacheco, Athus Cavalini, Giovanni Comarela
20 Mar 2025

Scaling Trends in Language Model Robustness
Nikolhaus Howe, Michal Zajac, I. R. McKenzie, Oskar Hollinsworth, Tom Tseng, Aaron David Tucker, Pierre-Luc Bacon, Adam Gleave
21 Feb 2025

Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models
Cameron R. Jones, Benjamin Bergen
22 Dec 2024
Evaluating the Performance of Large Language Models in Scientific Claim Detection and Classification
Tanjim Bin Faruk
21 Dec 2024

Persuasion with Large Language Models: a Survey
Alexander Rogiers, Sander Noels, Maarten Buyl, Tijl De Bie
11 Nov 2024

Using GPT Models for Qualitative and Quantitative News Analytics in the 2024 US Presidental Election Process
Bohdan M. Pavlyshenko
21 Oct 2024

How will advanced AI systems impact democracy?
Christopher Summerfield, Lisa Argyle, Michiel Bakker, Teddy Collins, Esin Durmus, ..., Elizabeth Seger, Divya Siddarth, Henrik Skaug Sætra, MH Tessler, M. Botvinick
27 Aug 2024

AI-Driven Review Systems: Evaluating LLMs in Scalable and Bias-Aware Academic Reviews
Keith Tyser, Ben Segev, Gaston Longhitano, Xin-Yu Zhang, Zachary Meeks, ..., Nicholas Belsten, A. Shporer, Madeleine Udell, Dov Te’eni, Iddo Drori
19 Aug 2024

Large language models can consistently generate high-quality content for election disinformation operations
Angus R. Williams, Liam Burke-Moore, Ryan Sze-Yin Chan, Florence E. Enock, Federico Nanni, Tvesha Sippy, Yi-Ling Chung, Evelina Gabasova, Kobi Hackenburg, Jonathan Bright
13 Aug 2024
MetaSumPerceiver: Multimodal Multi-Document Evidence Summarization for Fact-Checking
Ting-Chih Chen, Chia-Wei Tang, Chris Thomas
18 Jul 2024

When LLMs Play the Telephone Game: Cumulative Changes and Attractors in Iterated Cultural Transmissions
Jérémy Perez, Corentin Léger, Grgur Kovač, Cédric Colas, Gaia Molinaro, Maxime Derex, Pierre-Yves Oudeyer, Clément Moulin-Frier
05 Jul 2024

Catching Chameleons: Detecting Evolving Disinformation Generated using Large Language Models
Bohan Jiang, Chengshuai Zhao, Zhen Tan, Huan Liu
26 Jun 2024

Investigating the Influence of Prompt-Specific Shortcuts in AI Generated Text Detection
Choonghyun Park, Hyuhng Joon Kim, Junyeob Kim, Youna Kim, Taeuk Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-goo Lee, Kang Min Yoo
Communities: AAML
24 Jun 2024

Detecting AI-Generated Text: Factors Influencing Detectability with Current Methods
Kathleen C. Fraser, Hillary Dawkins, S. Kiritchenko
Communities: DeLMO
21 Jun 2024

PRISM: A Design Framework for Open-Source Foundation Model Safety
Terrence Neumann, Bryan Jones
14 Jun 2024

RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors
Liam Dugan, Alyssa Hwang, Filip Trhlik, Josh Magnus Ludan, Andrew Zhu, Hainiu Xu, Daphne Ippolito, Christopher Callison-Burch
Communities: DeLMO, AAML
13 May 2024
Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities
Xiaomin Yu, Yezhaohui Wang, Yanfang Chen, Zhen Tao, Dinghao Xi, Shichao Song, Simin Niu, Zhiyu Li
25 Apr 2024

Autonomous LLM-driven research from data to human-verifiable research papers
Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay, Roy Kishony
24 Apr 2024

Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations
Mahjabin Nahar, Haeseung Seo, Eun-Ju Lee, Aiping Xiong, Dongwon Lee
Communities: HILM
04 Apr 2024

Knowledge Conflicts for LLMs: A Survey
Rongwu Xu, Zehan Qi, Zhijiang Guo, Cunxiang Wang, Hongru Wang, Yue Zhang, Wei Xu
13 Mar 2024

A Survey of AI-generated Text Forensic Systems: Detection, Attribution, and Characterization
Tharindu Kumarage, Garima Agrawal, Paras Sheth, Raha Moraffah, Amanat Chadha, Joshua Garland, Huan Liu
Communities: DeLMO
02 Mar 2024

I Am Not Them: Fluid Identities and Persistent Out-group Bias in Large Language Models
Wenchao Dong, Assem Zhunis, Hyojin Chin, Jiyoung Han, Meeyoung Cha
16 Feb 2024

Lying Blindly: Bypassing ChatGPT's Safeguards to Generate Hard-to-Detect Disinformation Claims at Scale
Freddy Heppell, M. Bakir, Kalina Bontcheva
Communities: DeLMO
13 Feb 2024
Exploiting Novel GPT-4 APIs
Kellin Pelrine, Mohammad Taufeeque, Michal Zajkac, Euan McLean, Adam Gleave
Communities: SILM
21 Dec 2023

ChatGPT as a commenter to the news: can LLMs generate human-like opinions?
Rayden Tseng, Suzan Verberne, P. V. D. Putten
Communities: ALM, DeLMO, LLMAG
21 Dec 2023

In Generative AI we Trust: Can Chatbots Effectively Verify Political Information?
Elizaveta Kuznetsova, M. Makhortykh, Victoria Vziatysheva, Martha Stolze, Ani Baghumyan, Aleksandra Urman
20 Dec 2023

On a Functional Definition of Intelligence
Warisa Sritriratanarak, Paulo Garcia
15 Dec 2023

The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation
Rongwu Xu, Brian S. Lin, Shujian Yang, Tianqi Zhang, Weiyan Shi, Lei Bai, Zhixuan Fang, Wei Xu, Han Qiu
14 Dec 2023

Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates
Aida Mostafazadeh Davani, Mark Díaz, Dylan K. Baker, Vinodkumar Prabhakaran
Communities: AAML
11 Dec 2023

Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images
Shicheng Xu, Danyang Hou, Liang Pang, Jingcheng Deng, Jun Xu, Huawei Shen, Xueqi Cheng
23 Nov 2023
What Lies Beneath? Exploring the Impact of Underlying AI Model Updates in AI-Infused Systems
Vikram Mohanty, Jude Lim, Kurt Luther
17 Nov 2023

Adapting Fake News Detection to the Era of Large Language Models
Jinyan Su, Claire Cardie, Preslav Nakov
Communities: DeLMO
02 Nov 2023

LLMs may Dominate Information Access: Neural Retrievers are Biased Towards LLM-Generated Texts
Sunhao Dai, Yuqi Zhou, Liang Pang, Weihao Liu, Xiaolin Hu, Yong Liu, Xiao Zhang, Gang Wang, Jun Xu
31 Oct 2023

Fighting Fire with Fire: The Dual Role of LLMs in Crafting and Detecting Elusive Disinformation
Jason Samuel Lucas, Adaku Uchendu, Michiharu Yamashita, Jooyoung Lee, Shaurya Rohatgi, Dongwon Lee
24 Oct 2023

Disinformation Detection: An Evolving Challenge in the Age of LLMs
Qinglong Cao, Yuntian Chen, Ayushi Nirmal, Xiaokang Yang
Communities: DeLMO
25 Sep 2023

Can LLM-Generated Misinformation Be Detected?
Canyu Chen, Kai Shu
Communities: DeLMO
25 Sep 2023

Overview of AuTexTification at IberLEF 2023: Detection and Attribution of Machine-Generated Text in Multiple Domains
A. Sarvazyan, José Ángel González, Marc Franco-Salvador, Francisco Rangel, Berta Chulvi, Paolo Rosso
Communities: DeLMO
20 Sep 2023

Generative AI
Stefan Feuerriegel, Jochen Hartmann, Christian Janiesch, Patrick Zschech
13 Sep 2023
Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities
Maximilian Mozes, Xuanli He, Bennett Kleinberg, Lewis D. Griffin
24 Aug 2023

Anatomy of an AI-powered malicious social botnet
Kai-Cheng Yang, Filippo Menczer
Communities: DeLMO
30 Jul 2023

Generative Pre-trained Transformer: A Comprehensive Review on Enabling Technologies, Potential Applications, Emerging Challenges, and Future Directions
Gokul Yenduri, M. Ramalingam, G. C. Selvi, Y. Supriya, Gautam Srivastava, ..., Rutvij H. Jhaveri, B. Prabadevi, Weizheng Wang, Athanasios V. Vasilakos, Thippa Reddy Gadekallu
Communities: AI4CE, LM&MA
11 May 2023

Taking Advice from ChatGPT
Peter Zhang
11 May 2023

Language Model Behavior: A Comprehensive Survey
Tyler A. Chang, Benjamin Bergen
Communities: VLM, LRM, LM&MA
20 Mar 2023

MiDe22: An Annotated Multi-Event Tweet Dataset for Misinformation Detection
Cagri Toraman, Oguzhan Ozcelik, Furkan Şahinuç, Fazli Can
11 Oct 2022

Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals
G. Cabanac, C. Labbé, A. Magazinov
Communities: DeLMO
12 Jul 2021