  3. 2201.00572
8
7

Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings

3 January 2022
Gesina Schwalbe
Christian Wirth
Ute Schmid
Abstract

One major drawback of deep convolutional neural networks (CNNs) for use in safety-critical applications is their black-box nature. This makes it hard to verify or monitor complex, symbolic requirements on already-trained computer vision CNNs. In this work, we present a simple yet effective approach to verify that a CNN complies with symbolic predicate logic rules relating visual concepts. It is the first approach that (1) does not modify the CNN, (2) may use visual concepts that are not CNN input or output features, and (3) can leverage continuous CNN confidence outputs. To achieve this, we newly combine methods from explainable artificial intelligence and logic: first, using supervised concept embedding analysis, the output of a CNN is post hoc enriched by concept outputs; second, rules from prior knowledge are modelled as truth functions that accept the CNN outputs and can be evaluated with little computational overhead. We investigate the use of fuzzy logic, i.e., continuous truth values, and of proper output calibration, both of which show slight benefits in theory and in practice. Applicability is demonstrated on state-of-the-art object detectors for three verification use cases, where monitoring of rule breaches can reveal detection errors.
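To make the two-step pipeline concrete, below is a minimal sketch in Python. All names (`concept_confidence`, `person_rule`, the example rule and confidence values) are illustrative assumptions, not the authors' code or API; a linear probe and product fuzzy logic are one plausible instantiation of "supervised concept embedding analysis" and "truth functions", under the assumptions stated in the comments.

```python
import numpy as np

# ---- Step 1: post-hoc concept outputs via supervised concept embeddings ----
# Hypothetical sketch: a concept embedding modelled as a linear probe on an
# intermediate CNN feature vector. The probe weights are trained separately
# on concept-labeled data while the CNN itself stays frozen and unmodified.

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def concept_confidence(activation: np.ndarray, w: np.ndarray, b: float) -> float:
    """Continuous concept confidence in [0, 1] from a frozen CNN activation."""
    return float(sigmoid(activation @ w + b))

# ---- Step 2: rules as fuzzy truth functions over the enriched outputs ----
# Product fuzzy logic (one of several t-norm choices): negation is 1 - a,
# disjunction the probabilistic sum. A rule's continuous truth value near 0
# flags a likely rule breach on that detection.

def f_not(a: float) -> float:
    return 1.0 - a

def f_or(a: float, b: float) -> float:
    return a + b - a * b          # probabilistic sum (t-conorm)

def f_implies(a: float, b: float) -> float:
    return f_or(f_not(a), b)      # material implication: (NOT a) OR b

def person_rule(person: float, head: float, torso: float) -> float:
    """Truth value of the example rule: person(x) -> (head(x) OR torso(x))."""
    return f_implies(person, f_or(head, torso))

# Example: a confident person detection whose concept probes report neither
# a head nor a torso gets a low truth value -> likely detection error.
print(person_rule(person=0.95, head=0.05, torso=0.10))  # ~0.19
```

Product logic is only one choice; Gödel (min/max) or Łukasiewicz operators would slot into the same truth-function interface, and calibrated rather than raw confidences could be fed to the same functions.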
