ResearchTrend.AI


APE: Selective Fine-tuning with Acceptance Criteria for Language Model Adaptation

26 May 2025
Javier Marín
arXiv: abs | PDF | HTML
Main: 6 pages · 2 figures · 6 tables · Bibliography: 2 pages
Abstract

We present Adjacent Possible Exploration (APE), a selective fine-tuning method for adapting large language models that systematically explores parameter modifications while maintaining model stability. Inspired by evolutionary optimization principles, APE evaluates multiple candidate parameter updates through fine-tuning on small data subsets and accepts only those exceeding a performance threshold. Unlike standard fine-tuning that follows single gradient directions, APE implements a filtered selection process that prevents destabilizing parameter changes while enabling systematic improvement. Our method achieves 33.9% BLEU improvement and 36.2% perplexity reduction on news summarization tasks while using minimal computational resources. The approach provides a practical framework for controlled model adaptation that balances performance gains with representational stability.
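The accept/reject loop the abstract describes — propose several candidate updates, evaluate each, and keep a change only if it beats the current model by more than a threshold — can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the `evaluate` and `propose` callables stand in for scoring a model (e.g. BLEU on a held-out set) and fine-tuning on a small data subset, and the toy demo below optimizes a single scalar.

```python
import random

def ape_step(params, evaluate, propose, n_candidates=5, threshold=0.0):
    """One APE round: generate several candidate updates of `params`,
    score each, and accept the best candidate only if it improves on
    the current score by more than `threshold`; otherwise keep the
    current parameters (preventing destabilizing changes)."""
    base_score = evaluate(params)
    best_params, best_score = params, base_score
    for _ in range(n_candidates):
        candidate = propose(params)   # stand-in for fine-tuning on a small subset
        score = evaluate(candidate)
        if score > best_score:
            best_params, best_score = candidate, score
    # acceptance criterion: improvement must exceed the threshold
    if best_score - base_score > threshold:
        return best_params, best_score
    return params, base_score

# Toy usage: "parameters" are one float, score peaks at x = 2.
rng = random.Random(0)
evaluate = lambda x: -(x - 2.0) ** 2
propose = lambda x: x + rng.uniform(-0.5, 0.5)
params, score = 0.0, evaluate(0.0)
for _ in range(50):
    params, score = ape_step(params, evaluate, propose, threshold=1e-4)
```

After enough rounds, `params` drifts toward the optimum while the threshold filters out updates that do not yield a clear improvement; in a real run the expensive part is that each `propose` call is itself a short fine-tuning job.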

@article{marin2025_2505.19912,
  title={APE: Selective Fine-tuning with Acceptance Criteria for Language Model Adaptation},
  author={Javier Mar\'in},
  journal={arXiv preprint arXiv:2505.19912},
  year={2025}
}