FairBelief -- Assessing Harmful Beliefs in Language Models
arXiv:2402.17389, 27 February 2024
Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, Debora Nozza
Papers citing "FairBelief -- Assessing Harmful Beliefs in Language Models" (6 papers)

Ask Me Anything: A simple strategy for prompting language models
Simran Arora, A. Narayan, Mayee F. Chen, Laurel J. Orr, Neel Guha, Kush S. Bhatia, Ines Chami, Frederic Sala, Christopher Ré
05 Oct 2022

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
Joel Jang, Seonghyeon Ye, Minjoon Seo
26 Sep 2022

Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation
Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Z. Hakkani-Tür
16 Oct 2021

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
Nora Kassner, Oyvind Tafjord, Hinrich Schütze, Peter Clark
29 Sep 2021

Disembodied Machine Learning: On the Illusion of Objectivity in NLP
Zeerak Talat, Smarika Lulz, Joachim Bingel, Isabelle Augenstein
28 Jan 2021

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
03 Sep 2019