Privacy-Preserving Conformal Prediction Under Local Differential Privacy

Conformal prediction (CP) provides sets of candidate classes with a guaranteed probability of containing the true class. However, it typically relies on a calibration set with clean labels. We address privacy-sensitive scenarios where the aggregator is untrusted and can only access a perturbed version of the true labels. We propose two complementary approaches under local differential privacy (LDP). In the first approach, users do not access the model but instead provide their input features together with a label perturbed by a k-ary randomized response. In the second approach, which enforces stricter privacy constraints, users add noise to their conformity score via a binary-search response mechanism. This method requires access to the classification model but preserves both data and label privacy. Both approaches compute the conformal threshold directly from noisy data without accessing the true labels. We prove finite-sample coverage guarantees and demonstrate robust coverage even under severe randomization. This approach unifies strong local privacy with predictive uncertainty control, making it well-suited for sensitive applications such as medical imaging or large language model queries, regardless of whether users can (or are willing to) compute their own scores.
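To make the first approach concrete, the sketch below shows the two standard building blocks it combines: a k-ary randomized response that privatizes each calibration label, and a split-conformal quantile computed by the aggregator from scores of the reported (noisy) labels. This is only an illustrative outline under assumed conventions; the function names (`k_rr`, `conformal_threshold`) are hypothetical, and the paper's actual debiasing of the threshold to account for the randomization noise is not reproduced here.

```python
import numpy as np


def k_rr(label: int, k: int, epsilon: float, rng: np.random.Generator) -> int:
    """k-ary randomized response: report the true label with probability
    e^eps / (e^eps + k - 1), otherwise report one of the other k-1 labels
    uniformly at random. Satisfies epsilon-LDP for the label."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return label
    other = rng.integers(k - 1)
    return int(other if other < label else other + 1)


def conformal_threshold(scores: np.ndarray, alpha: float) -> float:
    """Standard split-conformal threshold: the ceil((n+1)(1-alpha))/n empirical
    quantile of the calibration conformity scores (no privacy correction here)."""
    n = len(scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(scores, q, method="higher"))


# Illustrative use: the aggregator receives only privatized labels, scores the
# model's probability of the *reported* class, and calibrates on those noisy scores.
rng = np.random.default_rng(0)
k, epsilon, alpha = 10, 2.0, 0.1
true_labels = rng.integers(k, size=500)
noisy_labels = np.array([k_rr(int(y), k, epsilon, rng) for y in true_labels])
probs = rng.dirichlet(np.ones(k), size=500)  # stand-in for model softmax outputs
noisy_scores = 1.0 - probs[np.arange(500), noisy_labels]
tau = conformal_threshold(noisy_scores, alpha)
print(f"threshold computed from privatized labels: {tau:.3f}")
```

In this toy pipeline the quantile is taken over noisy scores without any correction, so it would not by itself give the paper's coverage guarantee; the point is only to show where the LDP mechanism sits relative to the calibration step.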
@article{penso2025_2505.15721,
  title={Privacy-Preserving Conformal Prediction Under Local Differential Privacy},
  author={Coby Penso and Bar Mahpud and Jacob Goldberger and Or Sheffet},
  journal={arXiv preprint arXiv:2505.15721},
  year={2025}
}