
Improving QA Efficiency with DistilBERT: Fine-Tuning and Inference on Mobile Intel CPUs

Comments: 7 pages of main text, 2 pages of bibliography, 3 figures, 3 tables
Abstract

This study presents an efficient transformer-based question-answering (QA) model optimized for deployment on a 13th Gen Intel Core i7-1355U CPU, using the Stanford Question Answering Dataset (SQuAD) v1.1. Leveraging exploratory data analysis, data augmentation, and fine-tuning of a DistilBERT architecture, the model achieves a validation F1 score of 0.6536 with an average inference time of 0.1208 seconds per question. Compared with a rule-based baseline (F1: 0.3124) and full BERT-based models, our approach offers a favorable trade-off between accuracy and computational efficiency, making it well suited for real-time applications on resource-constrained systems. The study also includes a systematic evaluation of data augmentation strategies and hyperparameter configurations, providing practical insights into optimizing transformer models for CPU-based inference.
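
As a rough illustration of the CPU-only inference loop the abstract describes, the Python sketch below times per-question answering with a publicly available SQuAD-tuned DistilBERT checkpoint. The checkpoint name, context, and questions are illustrative assumptions, not the authors' fine-tuned model or data, and measured latencies will vary with hardware.

# A minimal sketch of per-question QA latency on CPU. The checkpoint below is
# an assumed public SQuAD-tuned DistilBERT, not the paper's fine-tuned model.
import time

from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-uncased-distilled-squad",  # assumed stand-in checkpoint
    device=-1,  # -1 pins the pipeline to the CPU
)

context = (
    "The Stanford Question Answering Dataset (SQuAD) v1.1 contains more than "
    "100,000 question-answer pairs posed by crowdworkers on Wikipedia articles."
)
questions = [
    "How many question-answer pairs does SQuAD v1.1 contain?",
    "What are SQuAD's passages drawn from?",
]

# Warm-up call so one-time model-loading cost does not skew the measurement.
qa(question=questions[0], context=context)

start = time.perf_counter()
for q in questions:
    result = qa(question=q, context=context)
    print(f"{q} -> {result['answer']} (score={result['score']:.3f})")
avg = (time.perf_counter() - start) / len(questions)
print(f"Average inference time: {avg:.4f} s/question")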

@article{yinkfu2025_2505.22937,
  title={Improving QA Efficiency with DistilBERT: Fine-Tuning and Inference on Mobile Intel CPUs},
  author={Ngeyen Yinkfu},
  journal={arXiv preprint arXiv:2505.22937},
  year={2025}
}