A Collocation-based Method for Addressing Challenges in Word-level Metric Differential Privacy

30 June 2024
Stephen Meisenbacher
Maulik Chevli
Florian Matthes
Abstract

Applications of Differential Privacy (DP) in NLP must distinguish between the syntactic level on which a proposed mechanism operates, often taking the form of word-level or document-level privatization. Recently, several word-level Metric Differential Privacy approaches have been proposed, which rely on this generalized DP notion for operating in word embedding spaces. These approaches, however, often fail to produce semantically coherent textual outputs, and their application at the sentence- or document-level is only possible by a basic composition of word perturbations. In this work, we strive to address these challenges by operating between the word and sentence levels, namely with collocations. By perturbing n-grams rather than single words, we devise a method where composed privatized outputs have higher semantic coherence and variable length. This is accomplished by constructing an embedding model based on frequently occurring word groups, in which unigram words co-exist with bi- and trigram collocations. We evaluate our method in utility and privacy tests, which make a clear case for tokenization strategies beyond the word level.
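As background, the word-level metric-DP mechanisms the abstract refers to typically perturb a token by adding noise to its embedding vector and decoding to the nearest vocabulary entry. The sketch below illustrates that generic step only, not the authors' implementation; the function names and the `embeddings` table are hypothetical, and the one collocation-specific assumption is that vocabulary keys may be bi- or trigram phrases alongside single words, as the abstract describes.

```python
import numpy as np

def sample_metric_dp_noise(dim: int, epsilon: float,
                           rng: np.random.Generator) -> np.ndarray:
    # Noise with density proportional to exp(-epsilon * ||z||), a standard
    # choice for metric DP under Euclidean distance: direction uniform on
    # the unit sphere, magnitude drawn from Gamma(dim, 1/epsilon).
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return direction * magnitude

def perturb_token(token: str, embeddings: dict[str, np.ndarray],
                  epsilon: float, rng: np.random.Generator) -> str:
    # Noise the token's vector, then decode to the nearest vocabulary
    # entry. With a collocation-aware vocabulary, keys such as the bigram
    # "machine learning" sit alongside unigrams, so one perturbation can
    # replace an entire multi-word unit.
    vocab = list(embeddings)
    matrix = np.stack([embeddings[t] for t in vocab])
    noisy = embeddings[token] + sample_metric_dp_noise(matrix.shape[1],
                                                       epsilon, rng)
    nearest = int(np.argmin(np.linalg.norm(matrix - noisy, axis=1)))
    return vocab[nearest]
```

A smaller epsilon means larger noise and stronger privacy at the cost of semantic fidelity. Decoding over a mixed unigram/n-gram vocabulary is also what gives the composed outputs variable length: an input bigram may decode to a unigram or a trigram.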
