On Specifying for Trustworthiness

22 June 2022
Dhaminda B. Abeywickrama
A. Bennaceur
Greg Chance
Y. Demiris
Anastasia Kordoni
Mark Levine
Luke Moffat
Luc Moreau
M. Mousavi
B. Nuseibeh
S. Ramamoorthy
Jan Oliver Ringert
James Wilson
Shane Windsor
Kerstin Eder
Abstract

As autonomous systems (AS) increasingly become part of our daily lives, ensuring their trustworthiness is crucial. To demonstrate the trustworthiness of an AS, we first need to specify what is required for an AS to be considered trustworthy. This roadmap paper identifies key challenges in specifying for trustworthiness in AS, as raised during the "Specifying for Trustworthiness" workshop held as part of the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems (TAS) programme. We look across a range of AS domains, considering the resilience, trust, functionality, verifiability, security, and governance and regulation of AS, and identify some of the key specification challenges in each domain. We then highlight the intellectual challenges of specifying for trustworthiness in AS that cut across domains and are exacerbated by the inherent uncertainty of the environments in which AS must operate.
