The Need for Interpretable Features: Motivation and Taxonomy
arXiv:2202.11748 · 23 February 2022
Alexandra Zytek, Ignacio Arnaldo, Dongyu Liu, Laure Berti-Equille, K. Veeramachaneni
Topics: FAtt, XAI

Papers citing "The Need for Interpretable Features: Motivation and Taxonomy"

13 / 13 papers shown

MTV: Visual Analytics for Detecting, Investigating, and Annotating Anomalies in Multivariate Time Series
Dongyu Liu, Sarah Alnegheimish, Alexandra Zytek, K. Veeramachaneni
Topics: AI4TS · Citations: 21 · 10 Dec 2021

VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models
Furui Cheng, Dongyu Liu, F. Du, Yanna Lin, Alexandra Zytek, Haomin Li, Huamin Qu, K. Veeramachaneni
Citations: 38 · 04 Aug 2021

Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning in High-Stakes Decision Making
Alexandra Zytek, Dongyu Liu, R. Vaithianathan, K. Veeramachaneni
Citations: 48 · 02 Mar 2021

Dissonance Between Human and Machine Understanding
Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand
Citations: 74 · 18 Jan 2021

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
Topics: AI4TS, AI4CE · Citations: 403 · 19 Oct 2020

Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
Sungsoo Ray Hong, Jessica Hullman, E. Bertini
Topics: HAI · Citations: 194 · 23 Apr 2020

The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings
Binny Mathew, Sandipan Sikdar, Florian Lemmerich, M. Strohmaier
Citations: 36 · 27 Jan 2020

Interpretable and Differentially Private Predictions
Frederik Harder, Matthias Bauer, Mijung Park
Topics: FAtt · Citations: 53 · 05 Jun 2019

Stakeholders in Explainable AI
Alun D. Preece, Daniel Harborne, Dave Braines, Richard J. Tomsett, Supriyo Chakraborty
Citations: 157 · 29 Sep 2018

Feature Engineering for Predictive Modeling using Reinforcement Learning
Udayan Khurana, Horst Samulowitz, D. Turaga
Citations: 187 · 21 Sep 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
Topics: XAI · Citations: 4,272 · 22 Jun 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
Topics: FAtt · Citations: 22,002 · 22 May 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML · Citations: 3,809 · 28 Feb 2017