Covert Attacks on Machine Learning Training in Passively Secure MPC

Abstract

Secure multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model considers either an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. It has been argued that in some settings, active security is not a major concern, partly because of the potential risk of reputation loss if a party is detected cheating. In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols, while keeping essentially zero risk of the attack being detected. The attacks we show can compromise both the integrity and privacy of the model, including attacks reconstructing exact training data. Our results challenge the belief that a threat model excluding malicious behavior by the involved parties is reasonable in the context of privacy-preserving machine learning (PPML), motivating the use of actively secure protocols for training.
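The gap between passive and active security can be illustrated with a toy sketch (not the paper's protocol): in plain additive secret sharing, a passively secure computation has no integrity check, so a single corrupt party can shift its share and silently offset the result.

```python
# Toy 3-party additive secret sharing over a prime field, illustrating
# how an active adversary can inject an undetected additive error into
# a passively secure computation. Illustrative only; the paper's attacks
# target real MPC training protocols.
import random

P = 2**61 - 1  # prime modulus (illustrative choice)

def share(x, n=3):
    """Split x into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine additive shares."""
    return sum(shares) % P

# Honest run: parties locally add their shares of x and y.
x_sh, y_sh = share(10), share(32)
z_sh = [(a + b) % P for a, b in zip(x_sh, y_sh)]
assert reconstruct(z_sh) == 42

# Active deviation: party 0 shifts its share by delta. Nothing in a
# passively secure protocol detects this; the reconstructed output is
# simply offset by delta.
delta = 5
z_sh[0] = (z_sh[0] + delta) % P
assert reconstruct(z_sh) == (42 + delta) % P
```

During model training, such undetected errors can be steered to bias the learned model or, as the paper shows, to leak training data.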

@article{jagielski2025_2505.17092,
  title={Covert Attacks on Machine Learning Training in Passively Secure MPC},
  author={Matthew Jagielski and Daniel Escudero and Rahul Rachuri and Peter Scholl},
  journal={arXiv preprint arXiv:2505.17092},
  year={2025}
}