ExpProof : Operationalizing Explanations for Confidential Models with ZKPs
6 February 2025
Chhavi Yadav, Evan Monroe Laufer, Dan Boneh, Kamalika Chaudhuri

Papers citing "ExpProof : Operationalizing Explanations for Confidential Models with ZKPs" (27 papers)

  1. Auditing Local Explanations is Hard
     Robi Bhattacharjee, U. V. Luxburg. 18 Jul 2024. Tags: LRM, MLAU, FAtt. Citations: 2.
  2. zkLLM: Zero Knowledge Proofs for Large Language Models
     Haochen Sun, Jason Li, Hongyang Zhang. 24 Apr 2024. Tags: ALM. Citations: 26.
  3. Trustless Audits without Revealing Data or Models
     Suppakit Waiwitlikhit, Ion Stoica, Yi Sun, Tatsunori Hashimoto, Daniel Kang. 06 Apr 2024. Tags: MLAU. Citations: 10.
  4. FairProof : Confidential and Certifiable Fairness for Neural Networks
     Chhavi Yadav, A. Chowdhury, Dan Boneh, Kamalika Chaudhuri. 19 Feb 2024. Tags: MLAU. Citations: 8.
  5. Verifiable Fairness: Privacy-preserving Computation of Fairness for Machine Learning Systems
     Ehsan Toreini, M. Mehrnezhad, Aad van Moorsel. 12 Sep 2023. Citations: 5.
  6. Keeping Up with the Language Models: Robustness-Bias Interplay in NLI Data and Models
     Ioana Baldini, Chhavi Yadav, Payel Das, Kush R. Varshney. 22 May 2023. Tags: MLAU. Citations: 3.
  7. Scaling up Trustless DNN Inference with Zero-Knowledge Proofs
     Daniel Kang, Tatsunori Hashimoto, Ion Stoica, Yi Sun. 17 Oct 2022. Tags: LRM. Citations: 41.
  8. Active Fairness Auditing
     Tom Yan, Chicheng Zhang. 16 Jun 2022. Tags: FaML. Citations: 25.
  9. XAudit : A Theoretical Look at Auditing with Explanations
     Chhavi Yadav, Michal Moshkovitz, Kamalika Chaudhuri. 09 Jun 2022. Tags: XAI, FAtt, MLAU. Citations: 5.
  10. Framework for Evaluating Faithfulness of Local Explanations
     S. Dasgupta, Nave Frost, Michal Moshkovitz. 01 Feb 2022. Tags: FAtt. Citations: 62.
  11. Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
     Sebastian Bordt, Michèle Finck, Eric Raidl, U. V. Luxburg. 25 Jan 2022. Tags: AILaw. Citations: 78.
  12. pvCNN: Privacy-Preserving and Verifiable Convolutional Neural Network Testing
     Jiasi Weng, Jian Weng, Gui Tang, Anjia Yang, Ming Li, Jia-Nan Liu. 23 Jan 2022. Citations: 33.
  13. Fairness, Integrity, and Privacy in a Scalable Blockchain-based Federated Learning System
     Timon Rückel, Johannes Sedlmeir, Peter Hofmann. 11 Nov 2021. Tags: FedML. Citations: 58.
  14. Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
     Q. V. Liao, R. Varshney. 20 Oct 2021. Citations: 227.
  15. Counterfactual Explanations Can Be Manipulated
     Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju, Sameer Singh. 04 Jun 2021. Citations: 136.
  16. What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
     Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum. 15 Feb 2021. Tags: XAI. Citations: 418.
  17. A survey of algorithmic recourse: definitions, formulations, solutions, and prospects
     Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera. 08 Oct 2020. Tags: FaML. Citations: 172.
  18. Looking Deeper into Tabular LIME
     Damien Garreau, U. V. Luxburg. 25 Aug 2020. Tags: FAtt, LMTD. Citations: 30.
  19. Explaining the Explainer: A First Theoretical Analysis of LIME
     Damien Garreau, U. V. Luxburg. 10 Jan 2020. Tags: FAtt. Citations: 175.
  20. PyTorch: An Imperative Style, High-Performance Deep Learning Library
     Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala. 03 Dec 2019. Tags: ODL. Citations: 42,038.
  21. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
     Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju. 06 Nov 2019. Tags: FAtt, AAML, MLAU. Citations: 813.
  22. Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
     Matt Jordan, Justin Lewis, A. Dimakis. 20 Mar 2019. Tags: AAML. Citations: 57.
  23. Fairwashing: the risk of rationalization
     Ulrich Aïvodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp. 28 Jan 2019. Tags: FaML. Citations: 146.
  24. Defining Locality for Surrogates in Post-hoc Interpretablity
     Thibault Laugel, X. Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki. 19 Jun 2018. Tags: FAtt. Citations: 80.
  25. Inverse Classification for Comparison-based Interpretability in Machine Learning
     Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki. 22 Dec 2017. Citations: 100.
  26. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
     Sandra Wachter, Brent Mittelstadt, Chris Russell. 01 Nov 2017. Tags: MLAU. Citations: 2,332.
  27. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
     Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. 16 Feb 2016. Tags: FAtt, FaML. Citations: 16,828.