EVA-S2PMLP: Secure and Scalable Two-Party MLP via Spatial Transformation

Shizhao Peng
Shoumo Li
Tianle Tao
Main: 16 pages · 10 figures · 6 tables · Bibliography: 1 page · Appendix: 1 page
Abstract

Privacy-preserving neural network training in vertically partitioned scenarios is vital for secure collaborative modeling across institutions. This paper presents EVA-S2PMLP, an Efficient, Verifiable, and Accurate Secure Two-Party Multi-Layer Perceptron framework that introduces spatial-scale optimization for enhanced privacy and performance. To enable reliable computation in the real-number domain, EVA-S2PMLP proposes a secure transformation pipeline that maps scalar inputs to vector and matrix spaces while preserving correctness. The framework includes a suite of atomic protocols for linear and non-linear secure computation, with modular support for secure activation, matrix-vector operations, and loss evaluation. Theoretical analysis confirms the reliability, security, and asymptotic complexity of each protocol. Extensive experiments show that EVA-S2PMLP achieves high inference accuracy and significantly reduced communication overhead, with up to 12.3× improvement over baselines. Evaluation on benchmark datasets demonstrates that the framework maintains model utility while ensuring strict data confidentiality, making it a practical solution for privacy-preserving neural network training in finance, healthcare, and cross-organizational AI applications.
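The abstract's core idea is that linear operations can be evaluated on transformed (masked) data and recombined without revealing either party's inputs. The sketch below illustrates this general principle using plain additive secret sharing over floats; it is an illustrative stand-in, not the paper's actual spatial-transformation protocol, and the share range and public-weight assumption are hypothetical choices for the example.

```python
# Illustrative sketch only: generic additive secret sharing over reals,
# NOT the EVA-S2PMLP spatial-transformation protocol itself.
# Assumption: the weight vector ws is public; only the input xs is private.
import random

def share(x):
    """Split a scalar x into two additive shares, one per party."""
    r = random.uniform(-1e3, 1e3)  # random mask (hypothetical range)
    return r, x - r                # party A holds r, party B holds x - r

def secure_dot(xs, ws):
    """Each party computes a local dot product on its shares; the true
    result is recovered only by summing both partial results."""
    shares_a, shares_b = zip(*(share(x) for x in xs))
    partial_a = sum(a * w for a, w in zip(shares_a, ws))  # local at party A
    partial_b = sum(b * w for b, w in zip(shares_b, ws))  # local at party B
    return partial_a + partial_b                          # reconstruction

xs = [1.0, 2.0, 3.0]
ws = [0.5, -1.0, 2.0]
# Linear maps commute with additive sharing, so the reconstructed
# result matches the plaintext dot product up to float rounding.
plain = sum(x * w for x, w in zip(xs, ws))
assert abs(secure_dot(xs, ws) - plain) < 1e-6
```

Because each partial sum alone is uniformly masked, neither party learns the other's inputs from it; non-linear steps (activations, loss) are precisely where frameworks like EVA-S2PMLP need dedicated protocols beyond this linear trick.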

@article{peng2025_2506.15102,
  title={EVA-S2PMLP: Secure and Scalable Two-Party MLP via Spatial Transformation},
  author={Shizhao Peng and Shoumo Li and Tianle Tao},
  journal={arXiv preprint arXiv:2506.15102},
  year={2025}
}