Risk Analysis and Design Against Adversarial Actions

Learning models capable of providing reliable predictions in the face of adversarial actions has become a central focus of the machine learning community in recent years. This challenge arises from the observation that data encountered at deployment time often deviate from the conditions under which the model was trained. In this paper, we address deployment-time adversarial actions and propose a versatile, well-principled framework to evaluate the model's robustness against attacks of diverse types and intensities. While we initially focus on Support Vector Regression (SVR), the proposed approach extends naturally to the broad domain of learning via relaxed optimization techniques. Our results enable an assessment of the model's vulnerability without requiring additional test data and operate in a distribution-free setup. These results not only provide a tool to enhance trust in the model's applicability but also aid in selecting among competing alternatives. Later in the paper, we show that our findings also offer useful insights for establishing new results within the out-of-distribution framework.
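The abstract does not detail the attack model or the risk-analysis procedure itself; as a purely illustrative aid, the sketch below fits a standard SVR model (using scikit-learn, our own assumption) and empirically probes its predictions under bounded input perturbations of increasing intensity. This is not the paper's framework, which assesses vulnerability without additional test data and in a distribution-free setup; the data, perturbation radii, and evaluation loop here are hypothetical.

```python
# Illustrative sketch only: fit an SVR model and probe its sensitivity to
# bounded input perturbations of increasing intensity. This empirical probe is
# NOT the paper's distribution-free risk analysis; data and radii are hypothetical.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic regression data (hypothetical stand-in for the training set).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)

# Probe deployment-time perturbations of diverse intensities: worst observed
# error over a few random bounded perturbations of the inputs, per radius.
for radius in (0.0, 0.1, 0.3, 0.5):
    errors = []
    for _ in range(20):
        delta = rng.uniform(-radius, radius, size=X.shape)
        errors.append(np.abs(model.predict(X + delta) - y).max())
    print(f"radius={radius:.1f}  worst observed error={max(errors):.3f}")
```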
@article{campi2025_2505.01130,
  title   = {Risk Analysis and Design Against Adversarial Actions},
  author  = {Marco C. Campi and Algo Carè and Luis G. Crespo and Simone Garatti and Federico A. Ramponi},
  journal = {arXiv preprint arXiv:2505.01130},
  year    = {2025}
}