Learning accurate and interpretable tree-based models

Decision trees and their ensembles are popular in machine learning as easy-to-understand models. Several techniques have been proposed in the literature for learning tree-based classifiers, with different techniques working well on data from different domains. In this work, we develop approaches for designing tree-based learning algorithms given repeated access to data from the same domain. We study multiple formulations covering different aspects of, and popular techniques for, learning tree-based models. We propose novel parameterized classes of node splitting criteria for top-down algorithms, which interpolate between the widely used entropy- and Gini-impurity-based criteria, and provide theoretical bounds on the number of samples needed to learn the splitting function best suited to the data at hand. We also study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression. We further consider the problem of tuning the hyperparameters of classical decision tree pruning algorithms, including min-cost complexity pruning. In addition, our techniques can be used to optimize the explainability versus accuracy trade-off when using decision trees. We extend our results to tuning popular tree-based ensembles, including random forests and gradient-boosted trees. Finally, we demonstrate the significance of our approach on real-world datasets by learning data-specific decision trees that are simultaneously more accurate and more interpretable.
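
To make the interpolation idea concrete, the sketch below uses the Tsallis entropy family, a standard one-parameter impurity that recovers Shannon entropy as q → 1 and Gini impurity exactly at q = 2. The parameter name q and the helper functions are illustrative assumptions, not necessarily the paper's exact parameterization.

```python
import numpy as np

def tsallis_impurity(p, q):
    """Tsallis impurity of a class-probability vector p.

    Recovers Shannon entropy (in nats) as q -> 1 and Gini impurity
    exactly at q = 2. Illustrative sketch only; the paper's
    parameterized family of splitting criteria may differ in form.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # drop empty classes (0 * log 0 = 0)
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))  # Shannon entropy limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def split_gain(y_parent, y_left, y_right, q, n_classes):
    """Impurity reduction of a candidate split under parameter q."""
    def imp(y):
        counts = np.bincount(y, minlength=n_classes)
        return tsallis_impurity(counts / len(y), q)
    n = len(y_parent)
    return (imp(y_parent)
            - len(y_left) / n * imp(y_left)
            - len(y_right) / n * imp(y_right))
```

A top-down learner would score candidate splits with split_gain; given repeated access to datasets from the same domain, q can then be selected on held-out data, which is the kind of data-driven tuning whose sample complexity the paper bounds.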
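Likewise, min-cost complexity pruning is governed by a single regularization hyperparameter. A minimal sketch of tuning it on held-out data with scikit-learn, where the hyperparameter is exposed as ccp_alpha, might look as follows; the dataset, split, and tie-breaking are illustrative choices, not the paper's experimental setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Candidate pruning levels: the effective alphas of the fully grown tree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

# Pick the alpha with the best held-out accuracy; larger alphas prune
# more aggressively, trading accuracy for a smaller, more interpretable tree.
best_alpha, best_acc = 0.0, -1.0
for alpha in path.ccp_alphas:
    clf = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_alpha, best_acc = alpha, acc
```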
@article{balcan2025_2405.15911,
  title={Learning accurate and interpretable tree-based models},
  author={Maria-Florina Balcan and Dravyansh Sharma},
  journal={arXiv preprint arXiv:2405.15911},
  year={2025}
}