Aspect-Opinion Pair Extraction (AOPE) and Aspect Sentiment Triplet Extraction (ASTE) have drawn growing attention in NLP. However, most existing approaches extract aspects and opinions independently, optionally adding pairwise relations, which often leads to error propagation and high time complexity. To address these challenges, and inspired by transition-based dependency parsing, we propose the first transition-based model for AOPE and ASTE, which performs aspect and opinion extraction jointly, better capturing position-aware aspect-opinion relations and mitigating entity-level bias. By integrating contrastive-augmented optimization, our model delivers more accurate action predictions and jointly optimizes the separate subtasks in linear time. Extensive experiments on four commonly used ASTE/AOPE datasets show that, while our model underperforms some previous models when trained on a single dataset, it achieves the best performance on both ASTE and AOPE when trained on the combined datasets, outperforming the strongest previous models in F1 scores, often by a large margin. We hypothesize that this is due to our model's ability to learn transition actions from multiple datasets and domains. Our code is available at this https URL.
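The abstract does not spell out the transition system, so the following is only a minimal sketch of what a linear-time transition scheme for joint aspect-opinion extraction might look like. The action inventory (SHIFT, BEGIN-ASPECT, BEGIN-OPINION, APPEND, CLOSE, PAIR) and the state layout are hypothetical illustrations, not the paper's actual design.

```python
# Hypothetical sketch: a transition system that reads a sentence left to
# right and builds aspect/opinion spans plus their pairings. Each action
# consumes at most one token, so a full derivation is linear in length.
from dataclasses import dataclass, field

@dataclass
class State:
    buffer: list                                 # input tokens
    i: int = 0                                   # read position
    span: list = field(default_factory=list)     # span under construction
    span_type: str = ""                          # "ASPECT" or "OPINION"
    aspects: list = field(default_factory=list)
    opinions: list = field(default_factory=list)
    pairs: list = field(default_factory=list)

def step(state: State, action: str) -> State:
    """Apply one transition action to the parser state."""
    if action == "SHIFT":                        # skip a non-entity token
        state.i += 1
    elif action.startswith("BEGIN-"):            # open a new span
        state.span = [state.buffer[state.i]]
        state.span_type = action[len("BEGIN-"):]
        state.i += 1
    elif action == "APPEND":                     # extend the open span
        state.span.append(state.buffer[state.i])
        state.i += 1
    elif action == "CLOSE":                      # finalize the open span
        target = state.aspects if state.span_type == "ASPECT" else state.opinions
        target.append(tuple(state.span))
        state.span = []
    elif action == "PAIR":                       # link latest aspect & opinion
        state.pairs.append((state.aspects[-1], state.opinions[-1]))
    return state

# Usage: replaying a gold action sequence for a toy sentence.
tokens = "The pizza was absolutely delicious".split()
actions = ["SHIFT", "BEGIN-ASPECT", "CLOSE", "SHIFT",
           "BEGIN-OPINION", "APPEND", "CLOSE", "PAIR"]
state = State(buffer=tokens)
for a in actions:
    state = step(state, a)
print(state.pairs)  # [(('pizza',), ('absolutely', 'delicious'))]
```

In the actual model, a classifier would presumably predict the next action from the current state rather than replay a gold sequence, with the contrastive-augmented objective sharpening those action predictions; those details are beyond what the abstract states.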
@article{hou2025_2412.00208,
  title={Train Once for All: A Transitional Approach for Efficient Aspect Sentiment Triplet Extraction},
  author={Xinmeng Hou and Lingyue Fu and Chenhao Meng and Kounianhua Du and Wuqi Wang and Hai Hu},
  journal={arXiv preprint arXiv:2412.00208},
  year={2025}
}