
Data Doping or True Intelligence? Evaluating the Transferability of Injected Knowledge in LLMs

Main: 3 pages · 2 figures · 5 tables · Bibliography: 2 pages · Appendix: 3 pages
Abstract

As the knowledge of large language models (LLMs) becomes outdated over time, there is a growing need for efficient methods to update them, especially when injecting proprietary information. Our study reveals that comprehension-intensive fine-tuning tasks (e.g., question answering and fill-in-the-blank completion) achieve substantially higher knowledge retention rates (48%) than mapping-oriented tasks such as translation (17%) or text-to-JSON conversion (20%), despite exposure to identical factual content. We demonstrate that this pattern persists across model architectures and follows scaling laws, with larger models showing improved retention across all task types. However, all models exhibit significant performance drops when applying injected knowledge in broader contexts, suggesting limited semantic integration. These findings underscore the importance of task selection in updating LLM knowledge, demonstrating that effective knowledge injection depends not just on data exposure but on the depth of cognitive engagement during fine-tuning.
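The abstract contrasts four fine-tuning task formats that expose a model to the same fact. As a minimal illustration (not taken from the paper), the hypothetical Python sketch below casts one invented fact into each format; the fact, field names, and prompt templates are all assumptions.

# Hypothetical sketch: one injected fact rendered as the four fine-tuning
# task formats contrasted in the abstract. Purely illustrative.
fact = "The Zephyr-9 chip was released by Acme Corp in 2024."

examples = {
    # Comprehension-intensive formats
    "question_answering": {
        "prompt": "Who released the Zephyr-9 chip, and when?",
        "completion": "Acme Corp released the Zephyr-9 chip in 2024.",
    },
    "fill_in_the_blank": {
        "prompt": "The Zephyr-9 chip was released by ____ in ____.",
        "completion": "Acme Corp; 2024",
    },
    # Mapping-oriented formats (same factual content, different task)
    "translation": {
        "prompt": f"Translate to French: {fact}",
        "completion": "La puce Zephyr-9 a ete lancee par Acme Corp en 2024.",
    },
    "text_to_json": {
        "prompt": f"Convert to JSON: {fact}",
        "completion": '{"product": "Zephyr-9", "company": "Acme Corp", "year": 2024}',
    },
}

for task, pair in examples.items():
    print(task, "->", pair["prompt"])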

@article{jan2025_2505.17140,
  title={Data Doping or True Intelligence? Evaluating the Transferability of Injected Knowledge in LLMs},
  author={Essa Jan and Moiz Ali and Muhammad Saram Hassan and Fareed Zaffar and Yasir Zaki},
  journal={arXiv preprint arXiv:2505.17140},
  year={2025}
}