ResearchTrend.AI

arXiv:2109.14082
Sample-Efficient Safety Assurances using Conformal Prediction

28 September 2021
Rachel Luo
Shengjia Zhao
Jonathan Kuck
B. Ivanovic
Silvio Savarese
Edward Schmerling
Marco Pavone
Abstract

When deploying machine learning models in high-stakes robotics applications, the ability to detect unsafe situations is crucial. Early warning systems can provide alerts when an unsafe situation is imminent (in the absence of corrective action). To reliably improve safety, these warning systems should have a provable false negative rate; i.e., of the situations that are unsafe, fewer than an ε fraction will occur without an alert. In this work, we present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics in order to tune warning systems to provably achieve an ε false negative rate using as few as 1/ε data points. We apply our framework to a driver warning system and a robotic grasping application, and empirically demonstrate a guaranteed false negative rate while also observing a low false detection (positive) rate.
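The core statistical step the abstract describes can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation) of the standard split-conformal quantile rule: given danger scores from n calibration episodes that were actually unsafe, pick the alert threshold at the ⌊ε(n+1)⌋-th smallest score, so that a fresh unsafe episode is missed with probability at most ε by exchangeability. The function name and the convention that higher scores are more alarming are assumptions for illustration; the requirement n ≥ 1/ε − 1 matches the abstract's "as few as 1/ε data points" claim.

```python
import numpy as np

def calibrate_alert_threshold(unsafe_scores, eps):
    """Conformal calibration of a warning threshold (illustrative sketch).

    unsafe_scores: danger scores the warning system assigned to episodes
        known to be unsafe (assumed convention: higher = more alarming).
    eps: target false negative rate.

    An alert fires when a new score >= threshold, so a false negative is an
    unsafe episode whose score falls below it. Choosing the threshold as the
    k-th smallest calibration score with k = floor(eps * (n + 1)) gives
    P(new unsafe score < threshold) <= k / (n + 1) <= eps,
    since the rank of an exchangeable new score among n + 1 is uniform.
    """
    scores = np.sort(np.asarray(unsafe_scores, dtype=float))
    n = scores.size
    k = int(np.floor(eps * (n + 1)))
    if k < 1:
        # Need eps * (n + 1) >= 1, i.e. n >= 1/eps - 1 calibration points.
        raise ValueError("too few calibration points for this eps")
    return scores[k - 1]  # k-th smallest score
```

With 100 calibration scores and ε = 0.1, k = ⌊0.1 · 101⌋ = 10, so the threshold is the 10th smallest score; at most an ε fraction of future unsafe episodes score below it.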

View on arXiv