
Adversary Resilient Learned Bloom Filters

27 pages (main) + 2 pages (bibliography), 9 figures, 1 table
Abstract

Creating an adversary resilient Learned Bloom Filter \cite{learnedindexstructures} with provable guarantees is an open problem \cite{reviriego1}. We define a strong adversarial model for the Learned Bloom Filter and construct two adversary resilient variants, called the Uptown Bodega Filter and the Downtown Bodega Filter. Our adversarial model extends an existing model designed for the Classical (i.e., not ``Learned'') Bloom Filter by Naor and Yogev~\cite{moni1} and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). We show that if pseudo-random permutations exist, then a secure Learned Bloom Filter may be constructed with $\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We further show that, under the same assumption, a \textit{high utility} Learned Bloom Filter may be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. Finally, we construct a hybrid adversarial model for the case where a fraction of the workload is chosen by an adversary, and we show realistic scenarios where the Downtown Bodega Filter gives better performance guarantees than alternative approaches in this hybrid model.
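To make the flavor of the construction concrete, here is a minimal sketch of a classical Bloom filter whose inputs are first passed through a secret-keyed primitive before hashing, so an adversary without the key cannot target false positives. This is an illustration only: it uses HMAC-SHA256 as a stand-in for the pseudo-random permutation, and the class name, parameters, and structure are our own assumptions, not the paper's actual Uptown/Downtega Bodega constructions.

```python
import hashlib
import hmac

class KeyedBloomFilter:
    """Sketch: a Bloom filter that applies one keyed primitive to each
    input before hashing. HMAC-SHA256 stands in for the pseudo-random
    permutation mentioned in the abstract (an assumption for illustration;
    the paper's constructions may differ)."""

    def __init__(self, m_bits: int, k_hashes: int, key: bytes):
        self.m = m_bits
        self.k = k_hashes
        self.key = key  # secret key, hidden from the adversary
        self.bits = bytearray((m_bits + 7) // 8)

    def _permute(self, item: bytes) -> bytes:
        # The "one extra primitive in the critical path": a keyed
        # transformation applied to every inserted or queried item.
        return hmac.new(self.key, item, hashlib.sha256).digest()

    def _indexes(self, item: bytes):
        y = self._permute(item)
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + y).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def query(self, item: bytes) -> bool:
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))
```

Because every hash index depends on the secret key, a PPT adversary that cannot distinguish the keyed primitive from random cannot efficiently search for inputs that collide into the filter's set bits, which is the intuition behind the resilience guarantees stated above.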

@article{almashaqbeh2025_2409.06556,
  title={Adversary Resilient Learned Bloom Filters},
  author={Ghada Almashaqbeh and Allison Bishop and Hayder Tirmazi},
  journal={arXiv preprint arXiv:2409.06556},
  year={2025}
}