ResearchTrend.AI
Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis

27 August 2024
Francesco Sovrano
Michaël Lognoul
Giulia Vilone

Papers citing "Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis"

9 papers:

  1. A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act. Francesco Sovrano, Salvatore Sapienza, M. Palmirani, F. Vitali. 21 Oct 2021.
  2. Brain Co-Processors: Using AI to Restore and Augment Brain Function. Rajesh P. N. Rao. 06 Dec 2020.
  3. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. Q. V. Liao, D. Gruen, Sarah Miller. 08 Jan 2020.
  4. Boolean Decision Rules via Column Generation. S. Dash, Oktay Gunluk, Dennis L. Wei. 24 May 2018.
  5. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das. 21 Feb 2018.
  6. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres. 30 Nov 2017.
  7. Interpretability via Model Extraction. Osbert Bastani, Carolyn Kim, Hamsa Bastani. 29 Jun 2017.
  8. A Unified Approach to Interpreting Model Predictions. Scott M. Lundberg, Su-In Lee. 22 May 2017.
  9. Model-Agnostic Interpretability of Machine Learning. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. 16 Jun 2016.