Interpretable DNFs

A classifier is considered interpretable if each of its decisions has an explanation which is small enough to be easily understood by a human user. A DNF formula can be seen as a binary classifier κ over boolean domains. The size of an explanation of a positive decision taken by a DNF κ is bounded by the size of the terms in κ, since we can explain a positive decision by giving a term of κ that evaluates to true. Since both positive and negative decisions must be explained, we consider that interpretable DNFs are those for which both κ and ¬κ can be expressed as DNFs composed of terms of bounded size. In this paper, we study the family of k-DNFs whose complements can also be expressed as k-DNFs. We compare two such families, namely depth-k decision trees and nested k-DNFs, a novel family of models. Experiments indicate that nested k-DNFs are an interesting alternative to decision trees in terms of interpretability and accuracy.
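To make the explanation mechanism concrete, here is a minimal sketch of a DNF acting as a binary classifier that explains its own decisions. The representation (terms as sets of literals) and all names are illustrative assumptions, not taken from the paper: a positive decision is explained by a satisfied term of κ, and a negative decision by a satisfied term of a DNF for ¬κ.

```python
# Minimal sketch, not the paper's implementation.
# A literal is (variable index, required truth value); a term is a
# frozenset of literals; a DNF is a list of terms and evaluates to
# True iff at least one of its terms is satisfied.

def satisfied(term, x):
    """A term is satisfied iff every literal agrees with the input x."""
    return all(x[i] == polarity for i, polarity in term)

def classify(kappa, x):
    """A DNF evaluates to True iff some term is satisfied."""
    return any(satisfied(t, x) for t in kappa)

def explain(kappa, not_kappa, x):
    """Explain the decision on x by a single satisfied term.

    A positive decision is explained by a term of kappa, a negative one
    by a term of not_kappa (a DNF for the complement).  If both kappa
    and not_kappa are k-DNFs, every explanation has at most k literals.
    """
    source = kappa if classify(kappa, x) else not_kappa
    return next(t for t in source if satisfied(t, x))

# Example: kappa = (x0 AND x1) OR (NOT x2), a 2-DNF whose complement
# (NOT x0 AND x2) OR (NOT x1 AND x2) is also a 2-DNF.
kappa = [frozenset({(0, True), (1, True)}), frozenset({(2, False)})]
not_kappa = [frozenset({(0, False), (2, True)}),
             frozenset({(1, False), (2, True)})]

x = [True, False, True]
print(classify(kappa, x))            # False
print(explain(kappa, not_kappa, x))  # frozenset({(1, False), (2, True)})
```

A depth-k decision tree yields such a pair directly: the root-to-leaf paths ending in positive leaves form a k-DNF for κ, and the paths ending in negative leaves form a k-DNF for ¬κ, since each path tests at most k variables.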
@article{cooper2025_2505.21212,
  title={Interpretable DNFs},
  author={Martin C. Cooper and Imane Bousdira and Clément Carbonnel},
  journal={arXiv preprint arXiv:2505.21212},
  year={2025}
}