ResearchTrend.AI
Beyond Explainability: The Case for AI Validation
Dalit Ken-Dror Feldman, Daniel Benoliel
arXiv:2505.21570, 27 May 2025

Papers citing "Beyond Explainability: The Case for AI Validation"

10 of 10 citing papers shown.

1. Beware of "Explanations" of AI
   David Martens, Galit Shmueli, Theodoros Evgeniou, Kevin Bauer, Christian Janiesch, ..., Claudia Perlich, Wouter Verbeke, Alona Zharova, Patrick Zschech, F. Provost
   09 Apr 2025

2. Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation
   Bowen Baker, Joost Huizinga, Leo Gao, Zehao Dou, M. Guan, Aleksander Mądry, Wojciech Zaremba, J. Pachocki, David Farhi
   14 Mar 2025

3. Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization
   Diana Pfau, Alexander Jung
   25 Oct 2024

4. A Blueprint for Auditing Generative AI
   Jakob Mökander, Justin Curl, Mihir A. Kshirsagar
   07 Jul 2024

5. The Case Against Explainability
   Hofit Wasserman Rozen, N. Elkin-Koren, Ran Gilad-Bachrach
   20 May 2023

6. Painting the black box white: experimental findings from applying XAI to an ECG reading setting
   F. Cabitza, M. Cameli, Andrea Campagner, Chiara Natali, Luca Ronzio
   27 Oct 2022

7. AI Failures: A Review of Underlying Issues
   D. Banerjee, S. S. Chanda
   18 Jul 2020

8. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
   Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
   06 Nov 2019

9. Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening
   Nan Wu, Jason Phang, Jungkyu Park, Yiqiu Shen, Zhe Huang, ..., S. G. Kim, Laura Heacock, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
   20 Mar 2019

10. Manipulating and Measuring Model Interpretability
    Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach
    21 Feb 2018