Policy-as-Prompt: Rethinking Content Moderation in the Age of Large Language Models

Content moderation plays a critical role in shaping safe and inclusive online environments, balancing platform standards, user expectations, and regulatory frameworks. Traditionally, policies are operationalised into guidelines that human moderators apply for enforcement, or that annotators use to label datasets for training machine learning moderation models. Recent advances in large language models (LLMs) are transforming this landscape: these models can interpret policies directly as textual inputs, eliminating the need for extensive data curation and allowing moderation to be adjusted dynamically through natural language interactions. This paradigm shift raises important questions about how policies are operationalised and what it means for content moderation practice. In this paper, we formalise the emerging policy-as-prompt framework and identify five key challenges across four domains: Technical Implementation (1. translating policy to prompts; 2. sensitivity to prompt structure and formatting), Sociotechnical (3. the risk of technological determinism in policy formation), Organisational (4. evolving roles between policy and machine learning teams), and Governance (5. model governance and accountability). For each challenge, we discuss potential mitigation approaches. This research provides actionable insights for practitioners and lays the groundwork for future work on scalable and adaptive content moderation systems in digital ecosystems.
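The policy-as-prompt idea described above can be sketched in a few lines: the policy text itself becomes part of the model's input, so changing the policy changes moderation behaviour without retraining or relabelling data. The sketch below is an illustration of the interface, not the paper's implementation; the `classify` stub is a hypothetical stand-in for a real hosted-LLM call.

```python
def build_moderation_prompt(policy: str, content: str) -> str:
    """Embed the policy text directly in the model input, so
    moderation rules can be updated without retraining."""
    return (
        "You are a content moderator. Apply the policy below.\n\n"
        f"POLICY:\n{policy}\n\n"
        f"CONTENT:\n{content}\n\n"
        "Answer with exactly one label: ALLOW or REMOVE."
    )


def classify(prompt: str) -> str:
    # Hypothetical stub: in practice this would call an LLM API.
    # It only illustrates the interface a policy-as-prompt
    # moderation pipeline requires.
    return "ALLOW"


# Updating the policy is a text edit, not a retraining run.
policy_v1 = "Remove content containing personal insults."
prompt = build_moderation_prompt(policy_v1, "Have a nice day!")
label = classify(prompt)
```

Note that because the policy is ordinary text, the same pipeline supports rapid iteration, A/B testing of policy wordings, and per-surface policy variants; the paper's challenges (e.g. sensitivity to prompt structure) arise precisely because the model's behaviour depends on how this text is written.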
@article{palla2025_2502.18695,
  title   = {Policy-as-Prompt: Rethinking Content Moderation in the Age of Large Language Models},
  author  = {Konstantina Palla and José Luis Redondo García and Claudia Hauff and Francesco Fabbri and Henrik Lindström and Daniel R. Taber and Andreas Damianou and Mounia Lalmas},
  journal = {arXiv preprint arXiv:2502.18695},
  year    = {2025}
}