On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products

Kush R. Varshney, H. Alemzadeh
5 October 2016

Papers citing "On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products"

Showing 25 of 75 citing papers.

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. Tags: AAML, XAI. 30 Apr 2020.

Machine Learning Algorithms for Financial Asset Price Forecasting
Philip Ndikum. Tags: AIFin. 31 Mar 2020.

Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies
Sampo Kuutti, Saber Fallah, Richard Bowden. Tags: AAML. 27 Feb 2020.

Aleatoric and Epistemic Uncertainty with Random Forests
M. Shaker, Eyke Hüllermeier. Tags: BDL, UD, PER. 03 Jan 2020.

A Survey of Deep Learning Applications to Autonomous Vehicle Control
Sampo Kuutti, Richard Bowden, Yaochu Jin, P. Barber, Saber Fallah. 23 Dec 2019.

Automated Dependence Plots
David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar. 02 Dec 2019.

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera. Tags: XAI. 22 Oct 2019.

Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods
Eyke Hüllermeier, Willem Waegeman. Tags: PER, UD. 21 Oct 2019.

A Survey of Deep Learning Techniques for Autonomous Driving
Sorin Grigorescu, Bogdan Trasnea, Tiberiu T. Cocias, G. Macesanu. Tags: 3DPC. 17 Oct 2019.

Towards Safe Machine Learning for CPS: Infer Uncertainty from Training Data
Xiaozhe Gu, Arvind Easwaran. 11 Sep 2019.

Understanding artificial intelligence ethics and safety
David Leslie. Tags: FaML, AI4TS. 11 Jun 2019.

Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI
Shane T. Mueller, R. Hoffman, W. Clancey, Abigail Emrey, Gary Klein. Tags: XAI. 05 Feb 2019.

Bias Mitigation Post-processing for Individual and Group Fairness
P. Lohia, Karthikeyan N. Ramamurthy, M. Bhide, Diptikalyan Saha, Kush R. Varshney, Ruchir Puri. Tags: FaML. 14 Dec 2018.

Probabilistic Object Detection: Definition and Evaluation
David Hall, Feras Dayoub, John Skinner, Haoyang Zhang, Dimity Miller, Peter Corke, G. Carneiro, A. Angelova, Niko Sünderhauf. Tags: UQCV. 27 Nov 2018.

Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead
Cynthia Rudin. Tags: ELM, FaML. 26 Nov 2018.

Promoting Distributed Trust in Machine Learning and Computational Simulation via a Blockchain Network
Nelson Bore, R. Raman, Isaac M. Markus, S. Remy, Oliver E. Bent, ..., E. Pissadaki, Biplav Srivastava, Roman Vaculin, Kush R. Varshney, Komminist Weldemariam. 25 Oct 2018.

FactSheets: Increasing Trust in AI Services through Supplier's Declarations of Conformity
Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, Stephanie Houde, S. Mehta, ..., Darrell Reimer, Alexandra Olteanu, David Piorkowski, Jason Tsay, Kush R. Varshney. Tags: HILM. 22 Aug 2018.

Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262
Rick Salay, Krzysztof Czarnecki. 05 Aug 2018.

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty. Tags: FaML. 20 Jun 2018.

To Trust Or Not To Trust A Classifier
Heinrich Jiang, Been Kim, Melody Y. Guan, Maya R. Gupta. Tags: UQCV. 30 May 2018.

Improving Confidence Estimates for Unfamiliar Examples
Zhizhong Li, Derek Hoiem. 09 Apr 2018.

How an Electrical Engineer Became an Artificial Intelligence Researcher, a Multiphase Active Contours Analysis
Kush R. Varshney. Tags: AI4CE. 29 Mar 2018.

Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer
David Madras, T. Pitassi, R. Zemel. Tags: FaML. 17 Nov 2017.

Detecting Statistical Interactions from Neural Network Weights
Michael Tsang, Dehua Cheng, Yan Liu. 14 May 2017.

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim. Tags: XAI, FaML. 28 Feb 2017.