ResearchTrend.AI
Generative Modeling of Weights: Generalization or Memorization?

9 June 2025
Boya Zeng
Yida Yin
Zhiqiu Xu
Zhuang Liu
Main: 9 pages · Appendix: 16 pages · Bibliography: 4 pages · 24 figures · 8 tables
Abstract

Generative models, with their success in image and video generation, have recently been explored for synthesizing effective neural network weights. These approaches take trained neural network checkpoints as training data and aim to generate high-performing neural network weights during inference. In this work, we examine four representative methods on their ability to generate novel model weights, i.e., weights that differ from the checkpoints seen during training. Surprisingly, we find that these methods synthesize weights largely by memorization: they produce either replicas, or at best simple interpolations, of the training checkpoints. Current methods fail to outperform simple baselines, such as adding noise to the weights or taking a simple weight ensemble, in obtaining models that are both different and high-performing. We further show that this memorization cannot be effectively mitigated by modifying modeling factors commonly associated with memorization in image diffusion models, or by applying data augmentations. Our findings provide a realistic assessment of what types of data current generative models can model, and highlight the need for more careful evaluation of generative models in new domains. Our code is available at this https URL.
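The simple baselines the abstract mentions can be sketched concretely. Below is a minimal, hypothetical illustration (not the authors' code): perturbing a checkpoint with Gaussian noise, averaging checkpoints as a weight ensemble, and a cosine-similarity proxy for checking how close a candidate weight vector is to the training checkpoints. The checkpoint vectors, the noise scale `sigma`, and the similarity metric are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained checkpoints, flattened into weight vectors.
ckpt_a = rng.normal(size=1000)
ckpt_b = rng.normal(size=1000)

def noise_baseline(weights, sigma=0.01, rng=rng):
    """Baseline 1: perturb a training checkpoint with small Gaussian noise."""
    return weights + sigma * rng.normal(size=weights.shape)

def ensemble_baseline(*checkpoints):
    """Baseline 2: element-wise average of checkpoints (simple weight ensemble)."""
    return np.mean(checkpoints, axis=0)

def max_cosine_to_training(candidate, checkpoints):
    """Memorization proxy: highest cosine similarity between the candidate
    and any training checkpoint. Values near 1 indicate a near-replica."""
    sims = [
        np.dot(candidate, c) / (np.linalg.norm(candidate) * np.linalg.norm(c))
        for c in checkpoints
    ]
    return max(sims)

noisy = noise_baseline(ckpt_a)
avg = ensemble_baseline(ckpt_a, ckpt_b)
print(max_cosine_to_training(noisy, [ckpt_a, ckpt_b]))  # close to 1
```

The paper's finding, restated in these terms, is that the generative methods it examines produce weights whose similarity to training checkpoints resembles the noise baseline (replicas) or the ensemble baseline (interpolations), rather than genuinely novel points in weight space.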

@article{zeng2025_2506.07998,
  title={Generative Modeling of Weights: Generalization or Memorization?},
  author={Boya Zeng and Yida Yin and Zhiqiu Xu and Zhuang Liu},
  journal={arXiv preprint arXiv:2506.07998},
  year={2025}
}