Advances in AI portend a new era of sophisticated disinformation operations. While individual AI systems already produce convincing, and at times misleading, information, an imminent development is the emergence of malicious AI swarms. These systems can coordinate covertly, infiltrate communities, evade traditional detectors, and run continuous A/B tests with round-the-clock persistence. Potential consequences include fabricated grassroots consensus, a fragmented shared reality, mass harassment, voter micro-suppression or mobilization, contamination of AI training data, and erosion of institutional trust. With democratic processes worldwide increasingly vulnerable, we urge a three-pronged response: (1) platform-side defenses, including always-on swarm-detection dashboards, pre-election high-fidelity swarm-simulation stress tests, transparency audits, and optional client-side "AI shields" for users; (2) model-side safeguards, including standardized persuasion-risk tests, provenance-authenticating passkeys, and watermarking; and (3) system-level oversight through a UN-backed AI Influence Observatory.
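To make the swarm-detection idea concrete, the sketch below shows one weak coordination signal such a dashboard might surface: clusters of distinct accounts posting near-duplicate text within a short time window. This is a minimal illustration under assumed inputs; the Post data shape, the Jaccard-over-shingles similarity, and every threshold here are hypothetical choices, not the paper's method.

# Hypothetical sketch of one swarm-detection signal: near-duplicate
# posts from distinct accounts inside a short time window. All names
# and thresholds are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Post:
    account: str
    timestamp: float  # seconds since epoch
    text: str

def shingles(text: str, k: int = 3) -> set:
    # Word k-grams as a cheap text fingerprint.
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated_accounts(posts, window=3600.0, sim_threshold=0.7):
    # Count, per account pair, near-duplicate posts published within
    # `window` seconds of each other.
    fingerprints = [(p, shingles(p.text)) for p in posts]
    flagged = defaultdict(int)
    for (p1, s1), (p2, s2) in combinations(fingerprints, 2):
        if p1.account == p2.account:
            continue
        if (abs(p1.timestamp - p2.timestamp) <= window
                and jaccard(s1, s2) >= sim_threshold):
            flagged[frozenset((p1.account, p2.account))] += 1
    return flagged

if __name__ == "__main__":
    demo = [
        Post("a1", 0.0, "candidate X secretly plans to ban cash payments nationwide"),
        Post("a2", 120.0, "candidate X secretly plans to ban cash payments nationwide!!"),
        Post("a3", 300.0, "baked sourdough this morning and it turned out great"),
    ]
    for pair, hits in flag_coordinated_accounts(demo).items():
        print(sorted(pair), "near-duplicate posts:", hits)

In practice, a dashboard would fuse many such weak signals (posting cadence, follower-graph structure, account age) rather than rely on a single content-similarity threshold, which an adaptive swarm could easily evade by paraphrasing.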
@article{schroeder2025_2506.06299,
  title   = {How Malicious AI Swarms Can Threaten Democracy},
  author  = {Daniel Thilo Schroeder and Meeyoung Cha and Andrea Baronchelli and Nick Bostrom and Nicholas A. Christakis and David Garcia and Amit Goldenberg and Yara Kyrychenko and Kevin Leyton-Brown and Nina Lutz and Gary Marcus and Filippo Menczer and Gordon Pennycook and David G. Rand and Frank Schweitzer and Christopher Summerfield and Audrey Tang and Jay Van Bavel and Sander van der Linden and Dawn Song and Jonas R. Kunst},
  journal = {arXiv preprint arXiv:2506.06299},
  year    = {2025}
}