Artificial Intelligence (AI) governance is the practice of establishing frameworks, policies, and procedures to ensure the responsible, ethical, and safe development and deployment of AI systems. Although AI governance is a core pillar of Responsible AI, the current literature still lacks a synthesis of governance frameworks and practices. Objective: To identify which frameworks, principles, mechanisms, and stakeholder roles are emphasized in the secondary literature on AI governance. Method: We conducted a rapid tertiary review of nine peer-reviewed secondary studies from IEEE and ACM (2020-2024), using structured inclusion criteria and thematic semantic synthesis. Results: The most cited frameworks include the EU AI Act and the NIST AI Risk Management Framework (RMF); transparency and accountability are the most frequently emphasized principles. Few reviews detail actionable governance mechanisms or stakeholder strategies. Conclusion: The review consolidates key directions in AI governance and highlights gaps in empirical validation and inclusivity. The findings inform both academic inquiry and practical adoption in organizations.
@article{ribeiro2025_2505.23417,
  title={Toward Effective AI Governance: A Review of Principles},
  author={Danilo Ribeiro and Thayssa Rocha and Gustavo Pinto and Bruno Cartaxo and Marcelo Amaral and Nicole Davila and Ana Camargo},
  journal={arXiv preprint arXiv:2505.23417},
  year={2025}
}