From n-gram to Attention: How Model Architectures Learn and Propagate Bias in Language Modeling
Mohsinul Kabir, Tasfia Tahsin, Sophia Ananiadou
arXiv:2505.12381, 18 May 2025

Papers citing "From n-gram to Attention: How Model Architectures Learn and Propagate Bias in Language Modeling" (4 papers)

Religious Bias Landscape in Language and Text-to-Image Models: Analysis, Detection, and Debiasing Strategies
Ajwad Abrar, Nafisa Tabassum Oeshy, Mohsinul Kabir, Sophia Ananiadou
14 Jan 2025

On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning
Franz Nowak, Anej Svete, Alexandra Butoi, Ryan Cotterell
20 Jun 2024

What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages
Nadav Borenstein, Anej Svete, R. Chan, Josef Valvoda, Franz Nowak, Isabelle Augenstein, Eleanor Chodroff, Ryan Cotterell
06 Jun 2024

ExpertPrompting: Instructing Large Language Models to be Distinguished Experts
Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, Zhendong Mao
24 May 2023