Improving the Validity of Decision Trees as Explanations

arXiv:2306.06777 · 11 June 2023
Jiří Němeček, Tomáš Pevný, Jakub Mareček
Abstract

In classification and forecasting with tabular data, tree-based models are widely used. They can be competitive with deep neural networks on tabular data and, under some conditions, explainable. Their explainability depends on the depth of the tree and on the accuracy in each leaf. We point out that decision trees whose leaves have unbalanced accuracy can provide misleading explanations: low-accuracy leaves yield less valid explanations, which can be perceived as unfair toward the subgroups that rely on them. Here, we train a shallow tree with the objective of minimizing the maximum misclassification error across all leaf nodes. The shallow tree provides a global explanation, while its overall statistical performance can become comparable to state-of-the-art methods (e.g., well-tuned XGBoost) by extending its leaves with further models.
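To make the leaf-accuracy concern concrete, the sketch below is a minimal diagnostic, not the paper's training procedure: it fits a standard shallow tree with scikit-learn on an illustrative dataset and then reports the per-leaf misclassification error rates, whose maximum is the quantity the paper's objective minimizes. The dataset, depth, and variable names are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset and depth; the paper's method optimizes the tree
# structure itself, whereas this only inspects an ordinarily trained tree.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Map each test sample to the leaf it falls into, then compute the
# misclassification rate separately within each leaf.
leaf_ids = tree.apply(X_te)
pred = tree.predict(X_te)
leaf_errors = {
    int(leaf): float(np.mean(pred[leaf_ids == leaf] != y_te[leaf_ids == leaf]))
    for leaf in np.unique(leaf_ids)
}

print("per-leaf error rates:", leaf_errors)
# A large gap between this maximum and the average error signals the
# unbalanced leaf accuracy the paper argues undermines explanations.
print("max leaf error:", max(leaf_errors.values()))
```

A tree with a low overall error can still contain one leaf where most samples are misclassified; users whose inputs land in that leaf receive systematically less valid explanations, which is the unfairness the abstract describes.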
