ResearchTrend.AI
Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach

16 November 2023
Yanchen Liu
Mingyu Derek Ma
Wenna Qin
Azure Zhou
Jiaao Chen
Weiyan Shi
Wei Wang
Diyi Yang

Papers citing "Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach"

6 papers shown.

1. MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways (04 Oct 2023)
   Mingyu Derek Ma, Alexander K. Taylor, Nuan Wen, Yanchen Liu, Po-Nien Kung, ..., Azure Zhou, Diyi Yang, Xuezhe Ma, Nanyun Peng, Wei Wang

2. Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf (09 Sep 2023)
   Yuzhuang Xu, Shuo Wang, Peng Li, Ziyue Wang, Xiaolong Wang, Weidong Liu, Yang Liu

3. Semantic-Oriented Unlabeled Priming for Large-Scale Language Models (12 Feb 2022)
   Yanchen Liu, Timo Schick, Hinrich Schütze

4. What Makes Good In-Context Examples for GPT-3? (17 Jan 2021)
   Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

5. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks (27 Aug 2019)
   Nils Reimers, Iryna Gurevych

6. RoBERTa: A Robustly Optimized BERT Pretraining Approach (26 Jul 2019)
   Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov