Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks

22 January 2020
Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht

Papers citing "Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks"

31 papers
A Systematic Literature Review on Safety of the Intended Functionality for Automated Driving Systems
Milin Patel, Rolf Jung, M. Khatun
04 Mar 2025
Landscape of AI safety concerns - A methodology to support safety assurance for AI-based autonomous systems
Ronald Schnitzer, Lennart Kilian, Simon Roessner, Konstantinos Theodorou, Sonja Zillner
18 Dec 2024
Real-Time Truly-Coupled Lidar-Inertial Motion Correction and Spatiotemporal Dynamic Object Detection
Cedric Le Gentil, Raphael Falque, Teresa Vidal-Calleja
07 Oct 2024
Learning-Based Error Detection System for Advanced Vehicle Instrument Cluster Rendering
Cornelius Bürkle, Fabian Oboril, Kay-Ulrich Scholl
04 Sep 2024
Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study
Pallavi Mitra, Gesina Schwalbe, Nadja Klein
AAML
31 May 2024
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
Anton Kuznietsov, Balint Gyevnar, Cheng Wang, Steven Peters, Stefano V. Albrecht
XAI
08 Feb 2024
A Safety-Adapted Loss for Pedestrian Detection in Automated Driving
Maria Lyssenko, Piyush Pimplikar, Maarten Bieshaar, Farzad Nozarian, Rudolph Triebel
05 Feb 2024
Characterizing Perspective Error in Voxel-Based Lidar Scan Matching
Jason Rife, Matthew McDermott
24 Jan 2024
Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement
Holger Boche, Adalbert Fono, Gitta Kutyniok
FaML
18 Jan 2024
Synergistic Perception and Control Simplex for Verifiable Safe Vertical Landing
Ayoosh Bansal, Yang Zhao, James Zhu, Sheng Cheng, Yuliang Gu, Hyung-Jin Yoon, Hunmin Kim, N. Hovakimyan, Lui Sha
05 Dec 2023
Labeling Neural Representations with Inverse Recognition
Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne
BDL
22 Nov 2023
AI Hazard Management: A framework for the systematic management of root causes for AI risks
Ronald Schnitzer, Andreas Hapfelmeier, Sven Gaube, Sonja Zillner
25 Oct 2023
Deep Learning Safety Concerns in Automated Driving Perception
Stephanie Abrecht, Alexander Hirsch, Shervin Raafatnia, Matthias Woehrle
07 Sep 2023
SURE-Val: Safe Urban Relevance Extension and Validation
Kai Storms, Kent Mori, S. Peters
04 Aug 2023
Combating noisy labels in object detection datasets
K. Chachula, Jakub Lyskawa, Bartlomiej Olber, Piotr Fratczak, A. Popowicz, Krystian Radlak
NoLa
25 Nov 2022
Perception Simplex: Verifiable Collision Avoidance in Autonomous Vehicles Amidst Obstacle Detection Faults
Ayoosh Bansal, Hunmin Kim, Simon Yu, Bo-wen Li, N. Hovakimyan, Marco Caccamo, L. Sha
AAML
04 Sep 2022
Verifiable Obstacle Detection
Ayoosh Bansal, Hunmin Kim, Simon Yu, Bo-Yi Li, N. Hovakimyan, Marco Caccamo, L. Sha
30 Aug 2022
Mitigating Shadows in Lidar Scan Matching using Spherical Voxels
Matthew McDermott, Jason Rife
01 Aug 2022
Tailored Uncertainty Estimation for Deep Learning Systems
Joachim Sicking, Maram Akila, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Wirtz, Stefan Wrobel
UQCV
29 Apr 2022
Ergo, SMIRK is Safe: A Safety Case for a Machine Learning Component in a Pedestrian Automatic Emergency Brake System
Markus Borg, Jens Henriksson, Kasper Socha, Olof Lennartsson, Elias Sonnsjo Lonegren, T. Bui, Piotr Tomaszewski, S. Sathyamoorthy, Sebastian Brink, M. H. Moghadam
16 Apr 2022
Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar
AAML
12 Jan 2022
Is the Rush to Machine Learning Jeopardizing Safety? Results of a Survey
M. Askarpour, Alan Wassyng, M. Lawford, R. Paige, Z. Diskin
29 Nov 2021
Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection
Koby Bibas, M. Feder, Tal Hassner
OODD
18 Oct 2021
Single-Step Adversarial Training for Semantic Segmentation
D. Wiens, Barbara Hammer
SSeg, AAML
30 Jun 2021
Exposing Previously Undetectable Faults in Deep Neural Networks
Isaac Dunn, Hadrien Pouget, Daniel Kroening, T. Melham
AAML
01 Jun 2021
Quality Assurance Challenges for Machine Learning Software Applications During Software Development Life Cycle Phases
Md. Abdullah Al Alamin, Gias Uddin
03 May 2021
Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, ..., Serin Varghese, Michael Weber, Sebastian J. Wirkert, Tim Wirtz, Matthias Woehrle
AAML
29 Apr 2021
Requirement Engineering Challenges for AI-intense Systems Development
Hans-Martin Heyn, E. Knauss, Amna Pir Muhammad, O. Eriksson, Jennifer Linder, P. Subbiah, S. K. Pradhan, Sagar Tungal
18 Mar 2021
A Review of Testing Object-Based Environment Perception for Safe Automated Driving
Michael Hoss, Maike Scholtes, L. Eckstein
16 Feb 2021
A Review and Comparative Study on Probabilistic Object Detection in Autonomous Driving
Di Feng, Ali Harakeh, Steven Waslander, Klaus C. J. Dietmayer
AAML, UQCV, EDL
20 Nov 2020
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV, BDL
06 Jun 2015