Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning

6 May 2021
Tong Wang · Jingyi Yang · Yunyi Li · Boxiang Wang
FAtt
ArXiv (abs) · PDF · HTML

Papers citing "Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning"

4 / 4 papers shown
Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean
Anton Thielmann · René-Marcel Kruse · Thomas Kneib · Benjamin Säfken
88 · 13 · 0 · 27 Jan 2023
Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection
James Enouen · Yan Liu
72 · 20 · 0 · 19 Sep 2022
Post-hoc Concept Bottleneck Models
Mert Yuksekgonul · Maggie Wang · James Zou
245 · 198 · 0 · 31 May 2022
Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin · Chaofan Chen · Zhi Chen · Haiyang Huang · Lesia Semenova · Chudi Zhong
FaML · AI4CE · LRM
248 · 678 · 0 · 20 Mar 2021