Is Fine-Tuning an Effective Solution? Reassessing Knowledge Editing for Unstructured Data

11 June 2025
Hao Xiong
Chuanyuan Tan
Wenliang Chen
Main: 7 pages · 3 figures · 8 tables · Bibliography: 2 pages · Appendix: 4 pages
Abstract

Unstructured Knowledge Editing (UKE) is crucial for updating the relevant knowledge of large language models (LLMs). It focuses on unstructured inputs, such as long or free-form texts, which are common forms of real-world knowledge. Although previous studies have proposed effective methods and tested them, two issues remain: (1) the lack of Locality evaluation for UKE, and (2) the abnormal failure of fine-tuning (FT) based methods for UKE. To address these issues, we first construct two datasets, UnKEBench-Loc and AKEW-Loc (CF), by extending two existing UKE datasets with locality test data from the unstructured and structured views. This enables a systematic evaluation of the Locality of post-edited models. Furthermore, we identify four factors that may affect the performance of FT-based methods. Based on these factors, we conduct experiments to determine how well-performing FT-based methods should be trained for the UKE task, providing a training recipe for future research. Our experimental results indicate that the FT-based method with the optimal setting (FT-UKE) is surprisingly strong, outperforming the existing state-of-the-art (SOTA). In batch editing scenarios, FT-UKE shows strong performance as well, and its advantage over SOTA methods increases as the batch size grows, expanding the average metric lead from +6.78% to +10.80%.
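The Locality issue raised in the abstract concerns whether an edit leaves unrelated knowledge untouched. As a rough, hedged illustration only (the function names, and the token-level F1 agreement metric, are assumptions for this sketch and not the paper's evaluation protocol), a locality score can be estimated by comparing pre- and post-edit generations on out-of-scope prompts:

```python
# Hypothetical sketch of a locality check for a post-edited model.
# The token-level F1 metric and all names here are illustrative assumptions,
# not the protocol used in UnKEBench-Loc / AKEW-Loc (CF).
from collections import Counter
from typing import Callable, List


def token_f1(reference: str, candidate: str) -> float:
    """Token-level F1 between two generated answers."""
    ref, cand = reference.split(), candidate.split()
    if not ref or not cand:
        return float(ref == cand)
    overlap = sum((Counter(ref) & Counter(cand)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def locality_score(
    pre_edit_model: Callable[[str], str],   # prompt -> generated text, before editing
    post_edit_model: Callable[[str], str],  # prompt -> generated text, after editing
    locality_prompts: List[str],            # queries unrelated to the edited knowledge
) -> float:
    """Average agreement between pre- and post-edit outputs on unrelated prompts.

    A score near 1.0 suggests the edit left out-of-scope knowledge untouched.
    """
    if not locality_prompts:
        raise ValueError("locality_prompts must be non-empty")
    scores = [
        token_f1(pre_edit_model(p), post_edit_model(p)) for p in locality_prompts
    ]
    return sum(scores) / len(scores)
```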

@article{xiong2025_2506.09672,
  title={Is Fine-Tuning an Effective Solution? Reassessing Knowledge Editing for Unstructured Data},
  author={Hao Xiong and Chuanyuan Tan and Wenliang Chen},
  journal={arXiv preprint arXiv:2506.09672},
  year={2025}
}