Quantifying the Capability Boundary of DeepSeek Models: An Application-Driven Performance Analysis

DeepSeek-R1, known for its low training cost and exceptional reasoning capabilities, has achieved state-of-the-art performance on various benchmarks. However, detailed evaluations of DeepSeek-series models from the perspective of real-world applications are lacking, making it challenging for users to select the most suitable DeepSeek model for their specific needs. To address this gap, we present the first comprehensive evaluation of DeepSeek and its related models (including DeepSeek-V3, DeepSeek-R1, the DeepSeek-R1-Distill-Qwen series, the DeepSeek-R1-Distill-Llama series, their corresponding 4-bit quantized variants, and the reasoning model QwQ-32B) using our enhanced A-Eval benchmark, A-Eval-2.0. Our systematic analysis reveals several key insights: (1) Given identical model architectures and training data, models with more parameters perform better, consistent with the scaling law; however, smaller models can achieve stronger capabilities when trained with optimized strategies and higher-quality data. (2) Reasoning-enhanced models show significant gains on logical reasoning tasks but may underperform on text understanding and generation tasks. (3) As data difficulty increases, distillation and reasoning enhancements yield larger performance gains; interestingly, reasoning enhancements can even hurt performance on simpler problems. (4) Quantization affects different capabilities unevenly, causing significant drops in logical reasoning but minimal impact on text generation. Based on these results and findings, we design a model selection handbook that enables users to choose the most cost-effective model without extra effort.
@article{zhao2025_2502.11164,
  title={Quantifying the Capability Boundary of DeepSeek Models: An Application-Driven Performance Analysis},
  author={Kaikai Zhao and Zhaoxiang Liu and Xuejiao Lei and Jiaojiao Zhao and Zhenhong Long and Zipeng Wang and Ning Wang and Meijuan An and Qingliang Meng and Peijun Yang and Minjie Hua and Chaoyang Ma and Wen Liu and Kai Wang and Shiguo Lian},
  journal={arXiv preprint arXiv:2502.11164},
  year={2025}
}