ResearchTrend.AI

Meta-Evaluating Local LLMs: Rethinking Performance Metrics for Serious Games

13 April 2025
Andrés Isaza-Giraldo
Paulo Bala
Lucas Pereira
Abstract

The evaluation of open-ended responses in serious games presents a unique challenge, as correctness is often subjective. Large Language Models (LLMs) are increasingly being explored as evaluators in such contexts, yet their accuracy and consistency remain uncertain, particularly for smaller models intended for local execution. This study investigates the reliability of five small-scale LLMs when assessing player responses in En-join, a game that simulates decision-making within energy communities. By leveraging traditional binary classification metrics (including accuracy, true positive rate, and true negative rate), we systematically compare these models across different evaluation scenarios. Our results highlight the strengths and limitations of each model, revealing trade-offs between sensitivity, specificity, and overall performance. We demonstrate that while some models excel at identifying correct responses, others struggle with false positives or inconsistent evaluations. The findings highlight the need for context-aware evaluation frameworks and careful model selection when deploying LLMs as evaluators. This work contributes to the broader discourse on the trustworthiness of AI-driven assessment tools, offering insights into how different LLM architectures handle subjective evaluation tasks.
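The binary classification metrics the abstract names can be sketched as follows. This is an illustrative example only, assuming the LLM judge's verdicts are scored against human gold labels as true/false positives and negatives; the counts below are hypothetical and not taken from the paper.

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Return accuracy, TPR (sensitivity), and TNR (specificity)
    from confusion-matrix counts of an LLM judge's verdicts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    tpr = tp / (tp + fn)  # share of correct player responses accepted
    tnr = tn / (tn + fp)  # share of incorrect player responses rejected
    return accuracy, tpr, tnr

# Hypothetical example: a judge accepts 40 of 50 correct answers
# (TP=40, FN=10) and rejects 35 of 50 incorrect ones (TN=35, FP=15).
acc, tpr, tnr = binary_metrics(tp=40, fp=15, tn=35, fn=10)
print(f"accuracy={acc:.2f}  TPR={tpr:.2f}  TNR={tnr:.2f}")
# prints: accuracy=0.75  TPR=0.80  TNR=0.70
```

Reporting TPR and TNR separately, rather than accuracy alone, is what exposes the sensitivity/specificity trade-offs the abstract describes: a permissive judge inflates TPR at the cost of TNR, and vice versa.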

@article{isaza-giraldo2025_2504.12333,
  title={Meta-Evaluating Local LLMs: Rethinking Performance Metrics for Serious Games},
  author={Andrés Isaza-Giraldo and Paulo Bala and Lucas Pereira},
  journal={arXiv preprint arXiv:2504.12333},
  year={2025}
}