How Private is Your Attention? Bridging Privacy with In-Context Learning

22 April 2025
Soham Bonnerjee
Zhen Wei Yeon
Anna Asch
Sagnik Nandy
Promit Ghosal
Abstract

In-context learning (ICL), the ability of transformer-based models to perform new tasks from examples provided at inference time, has emerged as a hallmark of modern language models. While recent work has investigated the mechanisms underlying ICL, its feasibility under formal privacy constraints remains largely unexplored. In this paper, we propose a differentially private pretraining algorithm for linear attention heads and present the first theoretical analysis of the privacy-accuracy trade-off for ICL in linear regression. Our results characterize the fundamental tension between optimization and privacy-induced noise, formally capturing behaviors observed in private training via iterative methods. Additionally, we show that our method is robust to adversarial perturbations of training prompts, unlike standard ridge regression. All theoretical findings are supported by extensive simulations across diverse settings.
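The abstract names a differentially private pretraining algorithm for linear attention heads, but this page carries no implementation details. The sketch below is a minimal, hypothetical illustration of the general recipe such an analysis studies: DP-SGD-style pretraining (per-prompt gradient clipping plus Gaussian noise) of a single linear attention head on synthetic in-context linear-regression prompts. The parameterization predict(W, X, y, x_q) = x_q^T W (X^T y) / n, the hyperparameters, and all names are assumptions for illustration, not the authors' construction.

    import numpy as np

    # Hypothetical sketch, not the paper's algorithm: DP-SGD-style
    # pretraining of one linear attention head for in-context regression.
    rng = np.random.default_rng(0)
    d, n_ctx = 5, 20              # feature dim, in-context examples per prompt
    clip_norm, sigma = 1.0, 0.8   # per-prompt clip bound, DP noise multiplier
    lr, steps, batch = 0.05, 500, 32

    W = np.zeros((d, d))          # assumed linear attention parameter

    def make_prompt():
        """Draw a random regression task and one prompt (context + query)."""
        beta = rng.normal(size=d) / np.sqrt(d)
        X = rng.normal(size=(n_ctx, d))
        y = X @ beta + 0.1 * rng.normal(size=n_ctx)
        x_q = rng.normal(size=d)
        return X, y, x_q, x_q @ beta

    def predict(W, X, y, x_q):
        # A common simplification of a linear attention head on this task:
        # y_hat = x_q^T W (X^T y) / n_ctx
        return x_q @ W @ (X.T @ y) / n_ctx

    for _ in range(steps):
        grad = np.zeros_like(W)
        for _ in range(batch):
            X, y, x_q, y_q = make_prompt()
            resid = predict(W, X, y, x_q) - y_q
            g = resid * np.outer(x_q, X.T @ y) / n_ctx   # grad of squared loss in W
            g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # clip per prompt
            grad += g
        grad += sigma * clip_norm * rng.normal(size=W.shape)  # Gaussian privacy noise
        W -= lr * grad / batch

    test = [make_prompt() for _ in range(200)]
    mse = np.mean([(predict(W, X, y, x_q) - y_q) ** 2 for X, y, x_q, y_q in test])
    print(f"test MSE of the privately pretrained head: {mse:.3f}")

The clip bound limits any single prompt's influence on each update, and the Gaussian noise scaled to clip_norm * sigma is what a standard DP-SGD accountant would convert into an (epsilon, delta) guarantee. The privacy-accuracy trade-off the paper analyzes appears here as the tension between a small sigma (less noise, better accuracy) and a large one (stronger privacy).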

@article{bonnerjee2025_2504.16000,
  title={How Private is Your Attention? Bridging Privacy with In-Context Learning},
  author={Soham Bonnerjee and Zhen Wei Yeon and Anna Asch and Sagnik Nandy and Promit Ghosal},
  journal={arXiv preprint arXiv:2504.16000},
  year={2025}
}