User Behavior Analysis in Privacy Protection with Large Language Models: A Study on Privacy Preferences with Limited Data

8 May 2025
Haowei Yang
Qingyi Lu
Yang Wang
Sibei Liu
Jiayun Zheng
Ao Xiang
Community: PILM
Abstract

With the widespread application of large language models (LLMs), user privacy protection has become a significant research topic. Existing privacy preference modeling methods often rely on large-scale user data, making effective privacy preference analysis challenging in data-limited environments. This study explores how LLMs can analyze user behavior related to privacy protection in scenarios with limited data and proposes a method that integrates Few-shot Learning and Privacy Computing to model user privacy preferences. The research utilizes anonymized user privacy settings data, survey responses, and simulated data, comparing the performance of traditional modeling approaches with LLM-based methods. Experimental results demonstrate that, even with limited data, LLMs significantly improve the accuracy of privacy preference modeling. Additionally, incorporating Differential Privacy and Federated Learning further reduces the risk of user data exposure. The findings provide new insights into the application of LLMs in privacy protection and offer theoretical support for advancing privacy computing and user behavior analysis.
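The abstract describes a pipeline combining few-shot LLM-based preference modeling with privacy-preserving release of results. The paper's code is not reproduced here; the Python sketch below is a hypothetical illustration of such a pipeline under stated assumptions: the few-shot examples, the classify stub, the label set, and the epsilon value are all invented for illustration and do not come from the paper.

import random
from collections import Counter

import numpy as np

# Hypothetical few-shot examples pairing a behavior description with a
# preference label. In the paper's setting these would come from the
# anonymized settings data and survey responses; the entries below are
# invented placeholders.
FEW_SHOT_EXAMPLES = [
    ("Disables ad personalization and location history", "strict"),
    ("Allows analytics but blocks third-party tracking", "moderate"),
    ("Accepts all cookies and shares usage data", "permissive"),
]

def build_prompt(behavior: str) -> str:
    """Assemble a few-shot classification prompt for an LLM."""
    lines = ["Classify the user's privacy preference as strict, moderate, or permissive."]
    for example, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Behavior: {example}\nPreference: {label}")
    lines.append(f"Behavior: {behavior}\nPreference:")
    return "\n\n".join(lines)

def classify(behavior: str) -> str:
    """Stand-in for a call to any chat-completion API; the random choice
    below only keeps the sketch runnable without credentials."""
    _prompt = build_prompt(behavior)
    return random.choice(["strict", "moderate", "permissive"])

def dp_label_counts(labels: list[str], epsilon: float = 1.0) -> dict[str, float]:
    """Release per-label counts with Laplace noise calibrated to a
    sensitivity of 1 (each user contributes one label)."""
    counts = Counter(labels)
    return {k: v + np.random.laplace(0.0, 1.0 / epsilon) for k, v in counts.items()}

# Usage: classify a batch of behavior descriptions, then publish only the
# noisy aggregate rather than individual predictions.
behaviors = ["Opts out of personalized ads", "Shares contacts with third parties"]
print(dp_label_counts([classify(b) for b in behaviors]))

The design point the abstract gestures at is that the few-shot prompt substitutes for large-scale training data, while the Laplace mechanism bounds what any single user's label can reveal; the paper's federated-learning component is omitted from this sketch for brevity.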

@article{yang2025_2505.06305,
  title={User Behavior Analysis in Privacy Protection with Large Language Models: A Study on Privacy Preferences with Limited Data},
  author={Haowei Yang and Qingyi Lu and Yang Wang and Sibei Liu and Jiayun Zheng and Ao Xiang},
  journal={arXiv preprint arXiv:2505.06305},
  year={2025}
}