Logo-LLM: Local and Global Modeling with Large Language Models for Time Series Forecasting

Abstract

Time series forecasting is critical across many domains, where the data exhibit both local patterns and global dependencies. While Transformer-based methods effectively capture global dependencies, they often overlook short-term local variations in time series. Recent methods that adapt large language models (LLMs) to time series forecasting inherit this limitation by treating LLMs as black-box encoders, relying solely on the final-layer output and underutilizing the hierarchical representations of intermediate layers. To address this, we propose Logo-LLM, a novel LLM-based framework that explicitly extracts and models multi-scale temporal features from different layers of a pre-trained LLM. Through empirical analysis, we show that shallow layers of LLMs capture local dynamics in time series, while deeper layers encode global trends. Logo-LLM further introduces lightweight Local-Mixer and Global-Mixer modules that align layer-wise features with the temporal input and integrate them across layers. Extensive experiments demonstrate that Logo-LLM achieves superior performance across diverse benchmarks, with strong generalization in few-shot and zero-shot settings while maintaining low computational overhead.
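The abstract describes the architecture only at a high level, so the following is a minimal sketch of the core idea: tapping hidden states from one shallow and one deep layer of a frozen pre-trained LLM and fusing them through lightweight mixers before a forecast head. The choice of GPT-2 as backbone, the patch embedding, the single-linear-layer mixers, the mean pooling, and all hyperparameters below are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn
from transformers import GPT2Model

class LogoLLMSketch(nn.Module):
    """Hypothetical sketch: fuse shallow (local) and deep (global) LLM features."""
    def __init__(self, patch_len=16, horizon=96, shallow_layer=1):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        for p in self.backbone.parameters():    # keep the pre-trained LLM frozen
            p.requires_grad = False
        d = self.backbone.config.hidden_size    # 768 for GPT-2 small
        self.patch_len = patch_len
        self.shallow_layer = shallow_layer
        self.patch_embed = nn.Linear(patch_len, d)   # map patches to LLM width
        self.local_mixer = nn.Linear(d, d)           # assumed lightweight Local-Mixer
        self.global_mixer = nn.Linear(d, d)          # assumed lightweight Global-Mixer
        self.head = nn.Linear(d, horizon)            # forecast head

    def forward(self, x):                            # x: (batch, seq_len)
        # Split the series into non-overlapping patches and embed them as tokens.
        patches = x.unfold(1, self.patch_len, self.patch_len)   # (b, n, patch_len)
        tokens = self.patch_embed(patches)                      # (b, n, d)
        out = self.backbone(inputs_embeds=tokens, output_hidden_states=True)
        shallow = out.hidden_states[self.shallow_layer]  # shallow layer: local dynamics
        deep = out.hidden_states[-1]                     # final layer: global trends
        fused = self.local_mixer(shallow) + self.global_mixer(deep)
        return self.head(fused.mean(dim=1))              # (b, horizon)

model = LogoLLMSketch()
x = torch.randn(4, 512)        # 4 univariate series, 512 past steps each
print(model(x).shape)          # torch.Size([4, 96])

Under this sketch, only the patch embedding, mixers, and head are trainable, which is consistent with the low computational overhead the abstract claims; the key point is that requesting output_hidden_states exposes the complementary shallow and deep representations instead of using only the final-layer output.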

@article{ou2025_2505.11017,
  title={Logo-LLM: Local and Global Modeling with Large Language Models for Time Series Forecasting},
  author={Wenjie Ou and Zhishuo Zhao and Dongyue Guo and Yi Lin},
  journal={arXiv preprint arXiv:2505.11017},
  year={2025}
}