In large-data applications, it is desirable to design algorithms with a high degree of parallelization. In the context of submodular optimization, adaptive complexity has become a widely-used measure of an algorithm's "sequentiality". Algorithms in the adaptive model proceed in rounds, and can issue polynomially many queries to a function in each round. The queries in each round must be independent, produced by a computation that depends only on query results obtained in previous rounds. In this work, we examine two fundamental variants of submodular maximization in the adaptive complexity model: cardinality-constrained monotone maximization, and unconstrained non-monotone maximization. Our main result is that an $r$-round algorithm for cardinality-constrained monotone maximization cannot achieve an approximation factor better than $1 - \frac{1}{e} - \varepsilon$, for any $\varepsilon < r^{-c}$ (where $c$ is some constant). This is the first result showing that the number of rounds must blow up polynomially as we approach the optimal factor of $1 - \frac{1}{e}$. For the unconstrained non-monotone maximization problem, we show a positive result: For every instance, and every $\varepsilon > 0$, either we obtain a $(\frac{1}{2} - \varepsilon)$-approximation in one round, or a $(\frac{1}{2} + \Omega(\varepsilon))$-approximation in $O(1/\varepsilon)$ rounds. In particular (and in contrast to the cardinality-constrained case), there cannot be an instance where (i) it is impossible to achieve an approximation factor better than $\frac{1}{2}$ regardless of the number of rounds, and (ii) it takes $\Omega(1/\varepsilon)$ rounds to achieve a factor of $\frac{1}{2} - \varepsilon$.
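To make the adaptive model concrete, here is a minimal sketch (not from the paper; the coverage objective and all function names are illustrative assumptions): the classic greedy algorithm for cardinality-constrained monotone maximization, phrased so that each greedy step is one adaptive round in which all marginal-gain queries are issued in parallel and depend only on answers from earlier rounds. This is exactly the kind of algorithm whose round count the lower bound concerns — greedy needs $k$ rounds for a cardinality constraint of $k$.

```python
# Sketch of the adaptive query model (toy example, assumed names).
# Each round submits a batch of independent set queries; the batch may
# only depend on query answers obtained in previous rounds.

def coverage(family):
    """Monotone submodular toy objective: number of points covered."""
    def f(S):
        covered = set()
        for i in S:
            covered |= family[i]
        return len(covered)
    return f

def greedy_adaptive(f, n, k):
    """Classic greedy in the adaptive model: one round per pick, with
    all remaining marginal-gain queries of a round issued in parallel."""
    S, rounds = set(), 0
    for _ in range(k):
        rounds += 1
        base = f(S)  # known from the previous round's answers
        # One adaptive round: a batch of independent queries.
        batch = {e: f(S | {e}) for e in range(n) if e not in S}
        best = max(batch, key=lambda e: batch[e] - base)
        S.add(best)
    return S, rounds

family = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3}}
f = coverage(family)
S, rounds = greedy_adaptive(f, n=4, k=2)
print(rounds)  # 2 -- one adaptive round per greedy pick
print(f(S))    # 4 -- greedy picks set 3 ({1,2,3}), then set 2 ({4})
```

Low-adaptivity algorithms replace the one-pick-per-round loop with rounds that commit to many elements at once, trading approximation quality for fewer rounds; the lower bound above shows that for the cardinality-constrained problem this trade-off is unavoidable near the factor $1 - \frac{1}{e}$.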