Pre-trained language models as knowledge bases for Automotive Complaint Analysis

4 December 2020
V. D. Viellieber
Matthias Aßenmacher
arXiv:2012.02558
Abstract

Recently it has been shown that large pre-trained language models like BERT (Devlin et al., 2018) are able to store commonsense factual knowledge captured in their pre-training corpus (Petroni et al., 2019). In our work we further evaluate this ability with respect to an industrial application, creating a set of probes specifically designed to reveal technical quality issues reported as incidents in unstructured customer feedback from the automotive industry. After probing the out-of-the-box versions of the pre-trained models with fill-in-the-mask tasks, we provide them with additional domain knowledge via continual pre-training on the Office of Defects Investigation (ODI) Complaints data set. In our experiments the models perform comparably on these domain-specific queries to how they perform when queried on general factual knowledge, as Petroni et al. (2019) have done. For most of the evaluated architectures the correct token is predicted with a Precision@1 (P@1) above 60%, while for P@5 and P@10 values well above 80% and up to 90%, respectively, are reached. These results show the potential of using language models as knowledge bases for the structured analysis of customer feedback.
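The abstract describes two steps: cloze-style (fill-in-the-mask) probing of off-the-shelf masked language models, and scoring with Precision@k. The following is a minimal sketch of such a probe, assuming the Hugging Face transformers library and bert-base-uncased; the probe sentence, the gold token "engine", and the helper name hit_at_k are hypothetical illustrations, not items from the paper's actual probe set or the ODI data.

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    # Out-of-the-box pre-trained model, as probed before continual pre-training.
    model_name = "bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    model.eval()

    def hit_at_k(probe: str, gold_token: str, k: int) -> bool:
        # A probe counts as a hit for Precision@k if the gold token appears
        # among the model's top-k predictions for the [MASK] position.
        inputs = tokenizer(probe, return_tensors="pt")
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
        with torch.no_grad():
            logits = model(**inputs).logits
        top_ids = logits[0, mask_pos[0]].topk(k).indices.tolist()
        return gold_token in tokenizer.convert_ids_to_tokens(top_ids)

    # Hypothetical probe in the spirit of the paper's quality-issue probes:
    probe = "The customer reported that the [MASK] failed while driving."
    hits = [hit_at_k(probe, "engine", k) for k in (1, 5, 10)]
    # P@k averages such per-probe hits over the whole probe set.
    print(dict(zip(["P@1", "P@5", "P@10"], hits)))

The paper's second step, continual pre-training on the ODI Complaints data set, would reuse the same masked-language-modeling objective on the in-domain complaint text before re-running the probes.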
