Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise

2 November 2022
Jhih-Cing Huang
Yu-Lin Tsai
Chao-Han Huck Yang
Cheng-Fang Su
Chia-Mu Yu
Pin-Yu Chen
Sy-Yen Kuo
Abstract

Recently, quantum classifiers have been found to be vulnerable to adversarial attacks, in which imperceptible perturbations deceive the classifier into misclassifying its input. In this paper, we present the first theoretical study demonstrating that adding quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks. We connect this to the definition of differential privacy and show that a quantum classifier trained in the natural presence of additive noise is differentially private. Finally, we derive a certified robustness bound that enables quantum classifiers to defend against adversarial examples, supported by experimental results simulated with noise from IBM's 7-qubit device.
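To make the smoothing intuition concrete, the sketch below is a minimal single-qubit illustration in NumPy, not the authors' implementation or their certified bound: a toy classifier's output is averaged over randomly sampled rotation-noise angles, which is the randomized-smoothing mechanism that the differential-privacy argument certifies. The circuit, the noise scale sigma, and the sample count are all hypothetical choices for illustration.

```python
# Minimal sketch (not the authors' code): smoothing a single-qubit
# classifier with random RY rotation noise via Monte Carlo averaging.
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def classify(state):
    """Toy classifier: probability of measuring |0> on the input state."""
    return np.abs(state[0]) ** 2

def smoothed_classify(state, sigma=0.3, n_samples=2000):
    """Average the classifier output over random rotation noise.

    Each sample applies an extra RY rotation with angle ~ N(0, sigma^2),
    playing the role of the quantum rotation noise in the paper."""
    probs = [classify(ry(rng.normal(0.0, sigma)) @ state)
             for _ in range(n_samples)]
    return float(np.mean(probs))

# Clean input state |psi> = RY(0.4)|0> and an adversarially perturbed copy.
ket0 = np.array([1.0, 0.0])
clean = ry(0.4) @ ket0
adv = ry(0.4 + 0.05) @ ket0  # small rotation acting as the attack

print("clean :", smoothed_classify(clean))
print("adv   :", smoothed_classify(adv))
```

In the paper's setting, the rotation noise arises inside the quantum circuit itself (with experiments simulated using noise from IBM's 7-qubit device), and the differential-privacy analysis converts this averaging into a certified robustness bound rather than the empirical comparison shown here.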
