On the Power of Learning-Augmented Search Trees

We study learning-augmented binary search trees (BSTs) via Treaps with carefully designed priorities. The result is a simple search tree in which the depth of each item $x$ is determined by its predicted weight $w_x$. Specifically, each item $x$ is assigned a composite priority of $-\lfloor \log\log(1/w_x) \rfloor + U(0,1)$, where $U(0,1)$ is a uniform random variable. By choosing $w_x$ as the relative frequency of $x$, the resulting search trees achieve static optimality. This approach generalizes the recent learning-augmented BSTs [Lin-Luo-Woodruff ICML '22], which work only for Zipfian distributions, to arbitrary input distributions. Furthermore, we show that our method extends to a B-Tree data structure via the B-Treap approach [Golovin ICALP '09]. Our search trees can also exploit locality in the access sequence through online self-reorganization, thereby achieving the working-set property. Additionally, they are robust to prediction errors and support dynamic operations such as insertions, deletions, and prediction updates. We complement our analysis with an empirical study demonstrating that our method outperforms prior work and classic data structures.
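As a concrete illustration of the composite-priority construction, here is a minimal sketch in Python: it builds a max-treap over (key, predicted weight) pairs using the priority $-\lfloor \log\log(1/w_x) \rfloor + U(0,1)$ stated in the abstract. The recursive builder, the clamping constant, the choice of base-2 logarithms, and the helper names (composite_priority, build_treap) are illustrative assumptions, not taken from the paper.

```python
import math
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    priority: float
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def composite_priority(w: float) -> float:
    """Composite priority -floor(log log(1/w)) + U(0,1) for predicted weight w.

    Larger predicted weight -> higher priority -> shallower expected depth.
    Base-2 logs are an assumption; the clamp guards against log of a
    non-positive value when w is very close to 1.
    """
    inner = max(math.log2(1.0 / w), 1e-12)
    return -math.floor(math.log2(inner)) + random.random()

def build_treap(items) -> Optional[Node]:
    """Build a max-treap from (key, weight) pairs: keys are kept in sorted
    (BST) order, and within each range the highest-priority item is the root."""
    nodes = [Node(k, composite_priority(w)) for k, w in sorted(items)]

    def rec(lo: int, hi: int) -> Optional[Node]:
        if lo >= hi:
            return None
        r = max(range(lo, hi), key=lambda i: nodes[i].priority)
        nodes[r].left = rec(lo, r)
        nodes[r].right = rec(r + 1, hi)
        return nodes[r]

    return rec(0, len(nodes))

def depth(root: Optional[Node], key: int, d: int = 0) -> int:
    """Return the depth of `key` in the treap, or -1 if absent."""
    if root is None:
        return -1
    if key == root.key:
        return d
    return depth(root.left if key < root.key else root.right, key, d + 1)

# Example: weights as relative access frequencies over four keys.
if __name__ == "__main__":
    freqs = [(1, 0.50), (2, 0.25), (3, 0.20), (4, 0.05)]
    root = build_treap(freqs)
    for k, w in freqs:
        print(f"key {k} (weight {w}): depth {depth(root, k)}")
```

With weights set to relative access frequencies, high-frequency keys fall into higher priority classes and therefore land near the root, so each item's expected depth scales with $\log(1/w_x)$, which is the behavior static optimality requires.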
@article{chen2025_2211.09251,
  title={On the Power of Learning-Augmented Search Trees},
  author={Jingbang Chen and Xinyuan Cao and Alicia Stepin and Li Chen},
  journal={arXiv preprint arXiv:2211.09251},
  year={2025}
}