ResearchTrend.AI

T2VUnlearning: A Concept Erasing Method for Text-to-Video Diffusion Models

23 May 2025
Xiaoyu Ye
Songjie Cheng
Yongtao Wang
Yajiao Xiong
Yishen Li
Abstract

Recent advances in text-to-video (T2V) diffusion models have significantly enhanced the quality of generated videos. However, their ability to produce explicit or harmful content raises concerns about misuse and potential rights violations. Inspired by the success of unlearning techniques in erasing undesirable concepts from text-to-image (T2I) models, we extend unlearning to T2V models and propose a robust and precise unlearning method. Specifically, we adopt negatively-guided velocity prediction fine-tuning and enhance it with prompt augmentation to ensure robustness against LLM-refined prompts. To achieve precise unlearning, we incorporate localization and preservation regularizations to preserve the model's ability to generate non-target concepts. Extensive experiments demonstrate that our method effectively erases a specific concept while preserving the model's generation capability for all other concepts, outperforming existing methods. We provide the unlearned models at this https URL.
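The abstract names three ingredients: negatively-guided velocity prediction fine-tuning, plus localization and preservation regularization to keep non-target generation intact. As a rough illustration only, the erasure term can be sketched in the style of ESD-like negative guidance: the fine-tuned model's velocity prediction on the target-concept prompt is pulled toward a negatively guided velocity from the frozen original model, while a preservation term keeps predictions on other prompts unchanged. The guidance scale `eta`, the weight `lam`, and the exact combination of terms below are assumptions for illustration, not the paper's published loss.

```python
# Hedged sketch of a negatively-guided unlearning objective for velocity
# prediction. All hyperparameters and the precise form of the loss are
# illustrative assumptions; the paper's actual method may differ.

def negative_guidance_target(v_uncond, v_cond, eta=1.0):
    """Guided velocity steered *away* from the erased concept:
    v_uncond - eta * (v_cond - v_uncond)."""
    return [u - eta * (c - u) for u, c in zip(v_uncond, v_cond)]

def mse(a, b):
    """Mean squared error between two equal-length velocity vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def unlearning_loss(v_student_target, v_student_other,
                    v_frozen_uncond, v_frozen_target, v_frozen_other,
                    eta=1.0, lam=0.5):
    # Erasure term: on the target-concept prompt, match the fine-tuned
    # ("student") prediction to the frozen model's negatively guided velocity.
    guided = negative_guidance_target(v_frozen_uncond, v_frozen_target, eta)
    erase = mse(v_student_target, guided)
    # Preservation term (stand-in for the localization/preservation
    # regularization): on non-target prompts, stay close to the frozen model.
    preserve = mse(v_student_other, v_frozen_other)
    return erase + lam * preserve

# Toy example with scalar "velocities" per latent dimension.
v_frozen_uncond = [0.0, 0.0, 0.0]
v_frozen_target = [1.0, 1.0, 1.0]
v_frozen_other  = [0.2, -0.1, 0.4]

# A perfectly unlearned student predicts the guided velocity on the target
# prompt and the frozen velocity elsewhere, giving zero loss.
v_student_target = negative_guidance_target(v_frozen_uncond, v_frozen_target)
v_student_other  = list(v_frozen_other)
loss = unlearning_loss(v_student_target, v_student_other,
                       v_frozen_uncond, v_frozen_target, v_frozen_other)
print(loss)  # 0.0
```

The negatively guided target pushes the conditional prediction past the unconditional one, so fine-tuning toward it suppresses the concept; the `lam`-weighted preservation term is what keeps the erasure precise rather than degrading the whole model.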

@article{ye2025_2505.17550,
  title={T2VUnlearning: A Concept Erasing Method for Text-to-Video Diffusion Models},
  author={Xiaoyu Ye and Songjie Cheng and Yongtao Wang and Yajiao Xiong and Yishen Li},
  journal={arXiv preprint arXiv:2505.17550},
  year={2025}
}