
Interpretable DNFs

Abstract

A classifier is considered interpretable if each of its decisions has an explanation which is small enough to be easily understood by a human user. A DNF formula can be seen as a binary classifier κ over boolean domains. The size of an explanation of a positive decision taken by a DNF κ is bounded by the size of the terms in κ, since we can explain a positive decision by giving a term of κ that evaluates to true. Since both positive and negative decisions must be explained, we consider that interpretable DNFs are those κ for which both κ and its complement can be expressed as DNFs composed of terms of bounded size. In this paper, we study the family of k-DNFs whose complements can also be expressed as k-DNFs. We compare two such families, namely depth-k decision trees and nested k-DNFs, a novel family of models. Experiments indicate that nested k-DNFs are an interesting alternative to decision trees in terms of interpretability and accuracy.
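The explanation scheme described in the abstract can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: a DNF is represented as a list of terms, each term a list of (variable, polarity) literals, and a positive decision is explained by returning a satisfied term, whose size is at most k for a k-DNF. All names here (`satisfies`, `classify_and_explain`) are hypothetical.

```python
def satisfies(x, term):
    """Check whether assignment x (dict: variable -> bool) satisfies a term,
    i.e. every literal in the term agrees with x."""
    return all(x[var] == polarity for var, polarity in term)

def classify_and_explain(dnf, x):
    """Evaluate the DNF on x. For a positive decision, the explanation is
    any satisfied term (at most k literals for a k-DNF); for a negative
    decision this sketch returns no explanation."""
    for term in dnf:
        if satisfies(x, term):
            return True, term
    return False, None

# Example 2-DNF: (a AND NOT b) OR (b AND c)
dnf = [[("a", True), ("b", False)], [("b", True), ("c", True)]]
decision, why = classify_and_explain(dnf, {"a": True, "b": False, "c": False})
# decision is True; why is the 2-literal term [("a", True), ("b", False)]
```

Explaining a *negative* decision in the same bounded way is exactly what requires the complement of κ to also be expressible as a k-DNF, which motivates the families studied in the paper.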

@article{cooper2025_2505.21212,
  title={Interpretable DNFs},
  author={Martin C. Cooper and Imane Bousdira and Clément Carbonnel},
  journal={arXiv preprint arXiv:2505.21212},
  year={2025}
}