
Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law

Main: 8 pages · Appendix: 5 pages · Bibliography: 3 pages · 9 figures · 4 tables
Abstract

Scaling laws model the relationship between training computation and validation loss, enabling researchers to predict a model's loss trend across different levels of computation. However, a gap remains between validation loss and a model's downstream capabilities, making it nontrivial to apply scaling laws directly to performance prediction on downstream tasks. The loss is typically a cumulative penalty over predicted tokens, which are implicitly treated as equally important. Nevertheless, our studies show that, under different training data distributions, the relationship between downstream capability and computation or token loss cannot be modeled directly. To bridge the gap between validation loss and downstream task capabilities, we introduce the Capability Salience Vector, which decomposes the overall loss and assigns different importance weights to tokens to assess a specific meta-capability, aligning validation loss with downstream task performance in terms of the model's capabilities. Experiments on various popular benchmarks demonstrate that the proposed Capability Salience Vector significantly improves the predictability of language model performance on downstream tasks.
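The core idea described in the abstract, replacing the uniform token average with a salience-weighted aggregation of per-token losses, can be illustrated with a short sketch. The code below is a minimal illustration under stated assumptions, not the authors' implementation: `salience` stands in for a hypothetical per-token importance vector (the paper derives such weights to align with a given meta-capability), and the softmax normalization is one plausible design choice.

```python
import torch
import torch.nn.functional as F

def per_token_loss(logits, targets):
    """Cross-entropy per token, with no aggregation (reduction='none')."""
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        targets.view(-1),
        reduction="none",
    ).view(targets.shape)  # shape: (batch, seq_len)

def capability_aligned_loss(logits, targets, salience):
    """Aggregate per-token losses with a salience vector instead of a
    uniform mean, so tokens indicative of the target meta-capability
    contribute more to the summary loss.

    salience: hypothetical importance scores of shape (seq_len,),
    assumed here to be normalized via softmax into weights.
    """
    token_loss = per_token_loss(logits, targets)       # (batch, seq_len)
    weights = torch.softmax(salience, dim=-1)          # importances sum to 1
    return (weights * token_loss).sum(dim=-1).mean()   # weighted aggregate
```

A uniform `salience` recovers the ordinary mean validation loss; a learned or estimated salience vector yields the capability-aligned loss whose scaling trend is then fit against downstream benchmark scores.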

@article{ge2025_2506.13216,
  title={Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law},
  author={Qiming Ge and Shuhao Xing and Songyang Gao and Yunhua Zhou and Yicheng Zou and Songyang Zhang and Zhi Chen and Hang Yan and Qi Zhang and Qipeng Guo and Kai Chen},
  journal={arXiv preprint arXiv:2506.13216},
  year={2025}
}