We consider the problem of finding the set of architectural parameters for a chosen deep neural network that is optimal under three metrics: parameter size, inference speed, and error rate. In this paper we state the problem formally and present an approximation algorithm that, for a large subset of instances, behaves like an FPTAS with approximation error $\epsilon$, and that runs in a number of steps polynomial in $1/\epsilon$, $m$, $n$, $B$, $|W|$, $|A|$, and $|H|$, where $m$ and $n$ are input parameters; $B$ is the batch size; $|W|$ denotes the cardinality of the largest weight set assignment; and $|A|$ and $|H|$ are the cardinalities of the candidate architecture and hyperparameter spaces, respectively.
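As a rough illustration only (not the paper's algorithm), the sketch below shows the standard $\epsilon$-Pareto filtering idea behind many multi-objective FPTAS constructions, in the spirit of Papadimitriou and Yannakakis (2000): evaluate each candidate (architecture, hyperparameter) pair on the three metrics, snap every metric onto a geometric $(1+\epsilon)$ grid, and keep one representative per occupied grid cell. The names `candidates` and `evaluate` are hypothetical placeholders, not identifiers from the paper.

```python
# Minimal sketch of epsilon-Pareto filtering over a candidate space.
# Assumptions (not from the paper): metrics are strictly positive,
# and `evaluate` returns (parameter size, inference latency, error rate).
import math
from typing import Callable, Dict, Iterable, Tuple

Metrics = Tuple[float, float, float]  # (param_size, latency, error_rate)


def eps_pareto_set(
    candidates: Iterable[object],
    evaluate: Callable[[object], Metrics],
    eps: float,
) -> Dict[Tuple[int, ...], Tuple[object, Metrics]]:
    """Keep one candidate per cell of the (1+eps)-geometric grid.

    Any dropped point shares its cell with a kept representative whose
    metrics match it to within a factor (1+eps) per coordinate, so the
    result is an eps-approximate Pareto front. Its size is polynomial
    in 1/eps and in the logarithms of the metric ranges.
    """
    base = math.log1p(eps)
    cells: Dict[Tuple[int, ...], Tuple[object, Metrics]] = {}
    for cand in candidates:
        m = evaluate(cand)
        # Grid coordinates: floor(log(metric) / log(1+eps)) per metric;
        # the max() clamp guards against a metric of exactly zero.
        cell = tuple(
            math.floor(math.log(max(v, 1e-12)) / base) for v in m
        )
        kept = cells.get(cell)
        # Tie-break within a cell: prefer the lower error rate.
        if kept is None or m[2] < kept[1][2]:
            cells[cell] = (cand, m)
    return cells
```

Enumerating the candidate space and filtering this way trades a $(1+\epsilon)$ loss per metric for a representative set whose size, and hence the number of comparisons, stays polynomial rather than growing with the full product of the architecture and hyperparameter spaces.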