
LTG at SemEval-2025 Task 10: Optimizing Context for Classification of Narrative Roles

Main: 4 pages · Appendix: 2 pages · Bibliography: 2 pages · 3 figures · 3 tables
Abstract

Our contribution to the SemEval-2025 shared task 10, subtask 1 on entity framing, tackles the challenge of providing the necessary segments from longer documents as context for classification with a masked language model. We show that a simple entity-oriented heuristic for context selection can enable text classification using models with a limited context window. Our context selection approach combined with the XLM-RoBERTa language model is on par with, or outperforms, supervised fine-tuning of larger generative language models.

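The full paper details the actual heuristic; as a rough illustration of the general idea only, the sketch below grows a sentence window around the target entity mention until the tokenized span would exceed the encoder's token budget. The function name, the naive sentence splitting, and the expansion order are assumptions made for illustration, not the authors' exact method.

# Minimal sketch of entity-oriented context selection (illustrative only).
from transformers import AutoTokenizer

def select_entity_context(text: str, entity_mention: str,
                          tokenizer, max_tokens: int = 512) -> str:
    """Return a span of sentences around the first mention of the entity
    that still fits within the encoder's token budget."""
    # Naive sentence split on full stops; a real system would use a
    # proper sentence segmenter.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if not sentences:
        return text
    # Centre the window on the sentence containing the entity mention,
    # falling back to the document start if the mention is not found.
    centre = next((i for i, s in enumerate(sentences) if entity_mention in s), 0)
    left, right, context = centre, centre, sentences[centre]
    # Grow the window one sentence at a time (left first, then right)
    # until any further expansion would exceed the token budget.
    while True:
        expanded = False
        for lo, hi in ((left - 1, right), (left, right + 1)):
            if lo < 0 or hi >= len(sentences):
                continue
            candidate = ". ".join(sentences[lo:hi + 1])
            if len(tokenizer(candidate)["input_ids"]) <= max_tokens:
                left, right, context, expanded = lo, hi, candidate, True
                break
        if not expanded:
            return context

# Hypothetical usage with a toy document and entity.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
document = ("The article opens with background. Acme Corp is accused of "
            "blocking the investigation. Observers called the move hostile. "
            "The report ends with unrelated remarks.")
print(select_entity_context(document, "Acme Corp", tokenizer, max_tokens=30))
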
@article{rønningstad2025_2506.05976,
  title={LTG at SemEval-2025 Task 10: Optimizing Context for Classification of Narrative Roles},
  author={Egil Rønningstad and Gaurav Negi},
  journal={arXiv preprint arXiv:2506.05976},
  year={2025}
}