Adversary Resilient Learned Bloom Filters

Abstract

A learned Bloom filter (LBF) combines a classical Bloom filter (CBF) with a learned model to reduce the memory needed to represent a given set while achieving a target false positive rate (FPR). Provable security against adaptive adversaries that deliberately attempt to increase the FPR has been studied for CBFs; achieving adaptive security for LBFs, however, has remained an open problem. In this paper, we close this gap. In particular, we define several adaptive security notions capturing varying degrees of adversarial control, including full and partial adaptivity, as well as LBF extensions of existing adversarial models for CBFs, including the Always-Bet and Bet-or-Pass notions. We propose two secure LBF constructions, PRP-LBF and Cuckoo-LBF, and formally prove their security under these models, assuming the existence of one-way functions. Based on our analysis and use case evaluations, our constructions achieve strong security guarantees while maintaining competitive FPR and memory overhead.
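To make the LBF structure described above concrete, here is a minimal sketch of the standard learned Bloom filter query path: a score model screens queries against a threshold, and set members the model would reject are stored in a small backup Bloom filter so that no false negatives occur. All names and parameters (`tau`, `m`, `k`, the toy model) are illustrative, not the paper's constructions.

```python
import hashlib

class BloomFilter:
    """Minimal classical Bloom filter using double hashing over SHA-256."""
    def __init__(self, m, k):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = [False] * m

    def _indexes(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def query(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits[idx] for idx in self._indexes(item))

class LearnedBloomFilter:
    """Learned Bloom filter: the model screens queries; members the model
    scores below tau go into a backup filter, preserving zero false
    negatives while shrinking the memory the backup filter needs."""
    def __init__(self, keys, model, tau, m, k):
        self.model, self.tau = model, tau
        self.backup = BloomFilter(m, k)
        for key in keys:
            if model(key) < tau:       # model alone would reject this member
                self.backup.add(key)

    def query(self, item):
        return self.model(item) >= self.tau or self.backup.query(item)
```

The overall FPR is the sum of the model's false positive rate on non-members and the backup filter's FPR on queries the model rejects; tuning `tau` trades these off against backup-filter size.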

@article{almashaqbeh2025adversary,
  title={Adversary Resilient Learned Bloom Filters},
  author={Ghada Almashaqbeh and Allison Bishop and Hayder Tirmazi},
  journal={arXiv preprint arXiv:2409.06556},
  year={2025}
}