Preempting Text Sanitization Utility in Resource-Constrained Privacy-Preserving LLM Interactions

18 November 2024
Robin Carpentier
Benjamin Zi Hao Zhao
Hassan Jameel Asghar
Dali Kaafar
Abstract

Interactions with online Large Language Models raise privacy issues, as providers can gather sensitive information about users and their companies from the prompts. While Differential Privacy can be applied to textual prompts through the Multidimensional Laplace Mechanism, we show that it is difficult to anticipate the utility of such a sanitized prompt. Poor utility has clear monetary consequences for LLM services charged on a pay-per-use model, and it also wastes a great amount of computing resources. To address this, we propose an architecture that predicts the utility of a given sanitized prompt before it is sent to the LLM. We experimentally show that our architecture helps prevent such resource waste for up to 12% of the prompts. We also reproduce experiments from one of the most-cited papers on distance-based DP for text sanitization and show that a potential performance-driven implementation choice completely changes the output, despite not being explicitly defined in the paper.
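To make the sanitization step concrete: distance-based DP mechanisms of the kind the abstract references typically perturb each token's embedding with multidimensional Laplace noise (a uniformly random direction scaled by a Gamma-distributed radius) and then map the noisy vector back to the nearest vocabulary word. The sketch below is illustrative only, with a toy vocabulary and made-up 3-dimensional embeddings; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with hand-made 3-d embeddings (illustrative, not a real model).
words = ["salary", "income", "weather", "rain"]
emb = np.array([[1.0, 0.0, 0.0],
                [0.9, 0.1, 0.0],
                [0.0, 1.0, 0.0],
                [0.1, 0.9, 0.1]])

def mlm_noise(d, epsilon, rng):
    """Multidimensional Laplace noise: uniform direction, Gamma(d, 1/epsilon) radius."""
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)            # uniform point on the unit sphere
    r = rng.gamma(shape=d, scale=1.0 / epsilon)  # radius; larger epsilon -> less noise
    return r * v

def sanitize(tokens, epsilon, rng):
    """Replace each token by the vocabulary word nearest its noised embedding."""
    out = []
    for tok in tokens:
        noisy = emb[words.index(tok)] + mlm_noise(emb.shape[1], epsilon, rng)
        out.append(words[int(np.argmin(np.linalg.norm(emb - noisy, axis=1)))])
    return out

print(sanitize(["salary", "weather"], epsilon=10.0, rng=rng))
```

At low epsilon the noise radius dominates the inter-word distances, so tokens are frequently replaced by unrelated words, which is exactly why the utility of a sanitized prompt is hard to anticipate in advance.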

@article{carpentier2025_2411.11521,
  title={Preempting Text Sanitization Utility in Resource-Constrained Privacy-Preserving LLM Interactions},
  author={Robin Carpentier and Benjamin Zi Hao Zhao and Hassan Jameel Asghar and Dali Kaafar},
  journal={arXiv preprint arXiv:2411.11521},
  year={2025}
}