Fane at SemEval-2025 Task 10: Zero-Shot Entity Framing with Large Language Models

Enfa Fane
Mihai Surdeanu
Eduardo Blanco
Steven R. Corman
Abstract

Understanding how news narratives frame entities is crucial for studying media's impact on societal perceptions of events. In this paper, we evaluate the zero-shot capabilities of large language models (LLMs) in classifying framing roles. Through systematic experimentation, we assess the effects of input context, prompting strategies, and task decomposition. Our findings show that a hierarchical approach that first identifies broad roles and then fine-grained roles outperforms single-step classification. We also demonstrate that optimal input contexts and prompts vary across task levels, highlighting the need for subtask-specific strategies. We achieve a Main Role Accuracy of 89.4% and an Exact Match Ratio of 34.5%, demonstrating the effectiveness of our approach. Our findings emphasize the importance of tailored prompt design and input context optimization for improving LLM performance in entity framing.
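
The hierarchical decomposition described in the abstract can be pictured as two chained zero-shot prompts. The sketch below is a minimal illustration only: the llm_complete callable, the prompt wording, and the partial role taxonomy are assumptions made for this example, not the authors' exact prompts or the full SemEval-2025 Task 10 role inventory.

# Minimal sketch of hierarchical two-step zero-shot entity framing.
# `llm_complete` and ROLE_TAXONOMY are illustrative assumptions, not the paper's
# exact prompts or the complete SemEval-2025 Task 10 taxonomy.
from typing import Callable, List, Tuple

# Illustrative (partial) taxonomy: broad main roles mapped to example fine-grained roles.
ROLE_TAXONOMY = {
    "Protagonist": ["Guardian", "Peacemaker", "Underdog"],
    "Antagonist": ["Instigator", "Deceiver", "Corrupt"],
    "Innocent": ["Victim", "Scapegoat", "Forgotten"],
}


def classify_entity(
    article: str,
    entity: str,
    llm_complete: Callable[[str], str],  # hypothetical LLM call: prompt -> completion text
) -> Tuple[str, List[str]]:
    """Two-step zero-shot entity framing: main role first, then fine-grained roles."""
    # Step 1: classify the broad (main) role of the entity in the article.
    main_prompt = (
        f"Article:\n{article}\n\n"
        f"Which main role best describes how '{entity}' is framed? "
        f"Answer with one of: {', '.join(ROLE_TAXONOMY)}."
    )
    main_role = llm_complete(main_prompt).strip()
    if main_role not in ROLE_TAXONOMY:
        main_role = "Innocent"  # simple fallback if the model answers off-list

    # Step 2: classify fine-grained roles, restricted to the predicted main role.
    fine_prompt = (
        f"Article:\n{article}\n\n"
        f"'{entity}' is framed as a {main_role}. Which fine-grained roles apply? "
        f"Choose from: {', '.join(ROLE_TAXONOMY[main_role])}. "
        f"Answer with a comma-separated list."
    )
    fine_roles = [
        r.strip()
        for r in llm_complete(fine_prompt).split(",")
        if r.strip() in ROLE_TAXONOMY[main_role]
    ]
    return main_role, fine_roles

Restricting the second prompt's label space to the predicted main role's subtree is what distinguishes this hierarchical setup from flat, single-step classification over all fine-grained roles.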

@article{fane2025_2504.20469,
  title={Fane at SemEval-2025 Task 10: Zero-Shot Entity Framing with Large Language Models},
  author={Enfa Fane and Mihai Surdeanu and Eduardo Blanco and Steven R. Corman},
  journal={arXiv preprint arXiv:2504.20469},
  year={2025}
}