
Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems

Abstract

As real-world images come in varying sizes, a machine learning model is often part of a larger system that includes an upstream image-scaling algorithm. In such a system, both the model and the scaling algorithm have become attractive targets for numerous attacks, such as adversarial examples and the more recent image-scaling attack. In response, researchers have developed defenses tailored to the attacks at each processing stage. Because these defenses are developed in isolation, their underlying assumptions may not hold when they are viewed from the perspective of an end-to-end machine learning system. It is therefore necessary to study these attacks and defenses in the context of complete machine learning systems. In this paper, we investigate the interplay between vulnerabilities of the image-scaling procedure and of machine learning models in the challenging hard-label black-box setting. We propose a series of novel techniques that enable a black-box attack to exploit vulnerabilities in scaling algorithms, scaling defenses, and the final machine learning model in an end-to-end manner. Based on this scaling-aware attack, we reveal that most existing scaling defenses are ineffective under threat from downstream models. Moreover, we empirically observe that standard black-box attacks can significantly improve their performance by exploiting the vulnerable scaling procedure. Finally, we demonstrate this problem on a commercial Image Analysis API with transfer-based black-box attacks.
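To make the scaling vulnerability concrete, below is a minimal, self-contained sketch of the basic image-scaling attack idea against a toy nearest-neighbor scaler. It is illustrative only, not the paper's scaling-aware black-box attack: the function names, image sizes, and the use of random arrays as stand-ins for real images are all assumptions made for the example.

```python
import numpy as np

def nearest_downscale(img, h, w):
    """Toy nearest-neighbor downscaler: samples a fixed grid of source pixels."""
    H, W = img.shape[:2]
    rows = np.arange(h) * H // h
    cols = np.arange(w) * W // w
    return img[np.ix_(rows, cols)]

def embed_target(source, target):
    """Overwrite only the pixels the scaler will sample, so the downscaled
    result equals `target` while the full-size image is barely changed."""
    out = source.copy()
    H, W = source.shape[:2]
    h, w = target.shape[:2]
    rows = np.arange(h) * H // h
    cols = np.arange(w) * W // w
    out[np.ix_(rows, cols)] = target
    return out

rng = np.random.default_rng(0)
source = rng.integers(0, 256, (1024, 1024, 3), dtype=np.uint8)  # stand-in for a benign image
target = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)      # stand-in for the attacker's payload
attacked = embed_target(source, target)

# Only 64*64 of the 1024*1024 pixels changed (~0.4%), yet downscaling
# reveals the payload exactly.
assert np.array_equal(nearest_downscale(attacked, 64, 64), target)
```

The attack works because a naive scaler consults only a sparse subset of source pixels; robust scalers that average many source pixels per output pixel blunt this, which is why existing defenses target the scaling stage. The paper's point is that such defenses must still be evaluated jointly with the downstream model rather than in isolation.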
