Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks

Artificial Intelligence (AI) presents both significant risks and valuable opportunities for democratic governance. This paper introduces a dual taxonomy to evaluate AI's complex relationship with democracy: the AI Risks to Democracy (AIRD) taxonomy, which identifies how AI can undermine core democratic principles such as autonomy, fairness, and trust; and the AI's Positive Contributions to Democracy (AIPD) taxonomy, which highlights AI's potential to enhance transparency, participation, efficiency, and evidence-based policymaking. Grounded in the European Union's approach to ethical AI governance, and particularly the seven Trustworthy AI requirements proposed by the European Commission's High-Level Expert Group on AI, each identified risk is aligned with mitigation strategies based on EU regulatory and normative frameworks. Our analysis underscores the transversal importance of transparency and societal well-being across all risk categories and offers a structured lens for aligning AI systems with democratic values. By integrating democratic theory with practical governance tools, this paper offers a normative and actionable framework to guide research, regulation, and institutional design in support of trustworthy, democratic AI. It provides scholars with a conceptual foundation to evaluate the democratic implications of AI, equips policymakers with structured criteria for ethical oversight, and helps technologists align system design with democratic principles. In doing so, it bridges the gap between ethical aspirations and operational realities, laying the groundwork for more inclusive, accountable, and resilient democratic systems in the algorithmic age.
@article{mentxaka2025_2505.13565,
  title={Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks},
  author={Oier Mentxaka and Natalia Díaz-Rodríguez and Mark Coeckelbergh and Marcos López de Prado and Emilia Gómez and David Fernández Llorca and Enrique Herrera-Viedma and Francisco Herrera},
  journal={arXiv preprint arXiv:2505.13565},
  year={2025}
}