Operationalizing Contextual Integrity in Privacy-Conscious Assistants

5 August 2024
Sahra Ghalebikesabi, Eugene Bagdasaryan, Ren Yi, Itay Yona, Ilia Shumailov, Aneesh Pappu, Chongyang Shi, Laura Weidinger, Robert Stanforth, Leonard Berrada, Pushmeet Kohli, Po-Sen Huang, Borja Balle
Abstract

Advanced AI assistants combine frontier LLMs and tool access to autonomously perform complex tasks on behalf of users. While the helpfulness of such assistants can increase dramatically with access to user information, including emails and documents, this raises privacy concerns about assistants sharing inappropriate information with third parties without user supervision. To steer information-sharing assistants to behave in accordance with privacy expectations, we propose to operationalize contextual integrity (CI), a framework that equates privacy with the appropriate flow of information in a given context. In particular, we design and evaluate a number of strategies to steer assistants' information-sharing actions to be CI-compliant. Our evaluation is based on a novel form-filling benchmark composed of human annotations of common webform applications, and it reveals that prompting frontier LLMs to perform CI-based reasoning yields strong results.

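The abstract's core idea, steering an assistant's information-sharing decisions with CI-based reasoning at the moment it would disclose data, can be pictured with a small prompting sketch. The snippet below is illustrative only and is not the paper's implementation or benchmark: call_llm is a placeholder for whatever chat-completion API is available, and the prompt wording, field names, and SHARE/WITHHOLD convention are assumptions.

# Minimal sketch (not the paper's method): ask an LLM to apply contextual-integrity
# (CI) reasoning before an assistant fills a web-form field with user data.
# `call_llm` is a stand-in for any text-in/text-out model call.

CI_PROMPT = """You are a privacy-conscious assistant filling a web form for a user.
Contextual integrity treats an information flow as appropriate only if it fits the
norms of the context: who sends the information, who receives it, what it is,
and why it is being requested.

Context: {context}
Recipient: {recipient}
Requested field: {field}
Candidate value from the user's data: {value}

Reason step by step about whether sharing this value is appropriate in this
context, then answer on the last line with exactly SHARE or WITHHOLD."""


def decide_flow(context: str, recipient: str, field: str, value: str, call_llm) -> bool:
    """Return True if the CI-reasoning model judges the information flow appropriate."""
    prompt = CI_PROMPT.format(context=context, recipient=recipient,
                              field=field, value=value)
    reply = call_llm(prompt)
    # Take the final non-empty line as the verdict; anything other than SHARE
    # is treated conservatively as WITHHOLD.
    lines = [line.strip() for line in reply.splitlines() if line.strip()]
    return bool(lines) and lines[-1].upper() == "SHARE"


# Hypothetical usage: a clinic intake form requesting a phone number.
# decide_flow("medical appointment booking", "clinic intake form",
#             "phone number", "+1-555-0100", call_llm=my_model)

The design choice here mirrors the abstract's framing: rather than a blanket allow/deny rule, the decision is conditioned on the contextual parameters (sender, recipient, information type, purpose), which is what makes the flow "appropriate" or not under CI.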