Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient

Max W. Shen
10 February 2022

Papers citing "Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient"

13 / 13 papers shown

Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts
M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik
24 Apr 2025

Applications of Generative AI (GAI) for Mobile and Wireless Networking: A Survey
Thai-Hoc Vu, Senthil Kumar Jagatheesaperumal, Minh-Duong Nguyen, Nguyen Van Huynh, Sunghwan Kim, Quoc-Viet Pham
30 May 2024

Understanding Inter-Concept Relationships in Concept-Based Models
Naveen Raman, M. Zarlenga, M. Jamnik
28 May 2024

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain
20 Nov 2023

A Framework for Interpretability in Machine Learning for Medical Imaging
Alan Q. Wang, Batuhan K. Karaman, Heejong Kim, Jacob Rosenthal, Rachit Saluja, Sean I. Young, M. Sabuncu
02 Oct 2023

SHARCS: Shared Concept Space for Explainable Multimodal Learning
Gabriele Dominici, Pietro Barbiero, Lucie Charlotte Magister, Pietro Lio, Nikola Simidjievski
01 Jul 2023

Interpretable Neural-Symbolic Concept Reasoning
Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, M. Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Lio, F. Precioso, M. Jamnik, G. Marra
27 Apr 2023

Combining Stochastic Explainers and Subgraph Neural Networks can Increase Expressivity and Interpretability
Indro Spinelli, Michele Guerra, F. Bianchi, Simone Scardapane
14 Apr 2023

A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities
Andrea Tocchetti, Lorenzo Corti, Agathe Balayn, Mireia Yurrita, Philip Lippmann, Marco Brambilla, Jie Yang
17 Oct 2022

Requirements Engineering for Machine Learning: A Review and Reflection
Zhong Pei, Lin Liu, Chen Wang, Jianmin Wang
03 Oct 2022

Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
M. Zarlenga, Pietro Barbiero, Gabriele Ciravegna, G. Marra, Francesco Giannini, ..., F. Precioso, S. Melacci, Adrian Weller, Pietro Lio, M. Jamnik
19 Sep 2022

Encoding Concepts in Graph Neural Networks
Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, F. Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, M. Jamnik, Pietro Lio
27 Jul 2022

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.3K
17,225
0
16 Feb 2016