Knowing which latent conditions lead to a particular outcome is useful for critically examining claims made about complex event outcomes. However, identifying implied conditions and examining their influence on an outcome is challenging. We address this by combining and augmenting annotations from two existing datasets consisting of goals and states, and we explore the influence of conditions through our research questions and Condition-based Reasoning tasks. We examine open and closed LLMs of varying sizes and intent alignment on our reasoning tasks and find that conditions are useful when not all context is available. Models differ widely in their ability to generate and identify outcome-variant conditions, which affects their performance on outcome validation when conditions are used to replace missing context. Larger models, such as GPT-4o, are more cautious in such less-constrained situations.
@article{vallurupalli2025_2506.01253,
  title={CoRE: Condition-based Reasoning for Identifying Outcome Variance in Complex Events},
  author={Sai Vallurupalli and Francis Ferraro},
  journal={arXiv preprint arXiv:2506.01253},
  year={2025}
}