Supersparse Linear Integer Models for Optimized Medical Scoring Systems

15 February 2015 (arXiv:1502.04269)
Berk Ustun, Cynthia Rudin

Papers citing "Supersparse Linear Integer Models for Optimized Medical Scoring Systems"

22 of 122 citing papers shown:

Actionable Recourse in Linear Classification (18 Sep 2018)
Berk Ustun, Alexander Spangher, Yang Liu [FaML]

Knowledge-based Transfer Learning Explanation (22 Jul 2018)
Jiaoyan Chen, Freddy Lecue, Jeff Z. Pan, Ian Horrocks, Huajun Chen

Confounding-Robust Policy Improvement (22 May 2018)
Nathan Kallus, Angela Zhou [CML, OffRL]

Manipulating and Measuring Model Interpretability (21 Feb 2018)
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach

Gaining Free or Low-Cost Transparency with Interpretable Partial Substitute (12 Feb 2018)
Tong Wang

Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making (03 Feb 2018)
Michael Veale, Max Van Kleek, Reuben Binns

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation (02 Feb 2018)
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez [FAtt, XAI]

Embedding Deep Networks into Visual Explanations (15 Sep 2017)
Zhongang Qi, Saeed Khorram, Fuxin Li

Interpretability via Model Extraction (29 Jun 2017)
Osbert Bastani, Carolyn Kim, Hamsa Bastani [FAtt]

Interpreting Blackbox Models via Model Extraction (23 May 2017)
Osbert Bastani, Carolyn Kim, Hamsa Bastani [FAtt]

PreCog: Improving Crowdsourced Data Quality Before Acquisition (07 Apr 2017)
H. Nilforoshan, Jiannan Wang, Eugene Wu

Learning Certifiably Optimal Rule Lists for Categorical Data (06 Apr 2017)
E. Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, Cynthia Rudin

Simple rules for complex decisions (15 Feb 2017)
Jongbin Jung, Connor Concannon, Ravi Shroff, Sharad Goel, D. Goldstein [CML]

Programs as Black-Box Explanations (22 Nov 2016)
Sameer Singh, Marco Tulio Ribeiro, Carlos Guestrin [FAtt]

Learning Optimized Risk Scores (01 Oct 2016)
Berk Ustun, Cynthia Rudin

Preterm Birth Prediction: Deriving Stable and Interpretable Rules from High Dimensional Data (28 Jul 2016)
Truyen Tran, Wei Luo, Dinh Q. Phung, Jonathan Morris, Kristen Rickard, Svetha Venkatesh

Interpretable Machine Learning Models for the Digital Clock Drawing Test (23 Jun 2016)
William Souillard-Mandar, Randall Davis, Cynthia Rudin, R. Au, Dana L. Penney

Model-Agnostic Interpretability of Machine Learning (16 Jun 2016)
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin [FAtt, FaML]

"Why Should I Trust You?": Explaining the Predictions of Any Classifier (16 Feb 2016)
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin [FAtt, FaML]

Learning Optimized Or's of And's (06 Nov 2015)
Tong Wang, Cynthia Rudin

Or's of And's for Interpretable Classification, with Application to Context-Aware Recommender Systems (28 Apr 2015)
Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, P. MacNeille

Interpretable Classification Models for Recidivism Prediction (26 Mar 2015)
J. Zeng, Berk Ustun, Cynthia Rudin [FaML]