PSformer: Parameter-efficient Transformer with Segment Attention for Time Series Forecasting
Yanlong Wang, Jinfeng Xu, Fei Ma, Shao-Lun Huang, Danny Dongning Sun, Xiao-Ping Zhang
AI4TS · 3 November 2024 · arXiv: 2411.01419

Papers citing "PSformer: Parameter-efficient Transformer with Segment Attention for Time Series Forecasting"

29 papers

FinTSBridge: A New Evaluation Suite for Real-world Financial Prediction with Advanced Time Series Models
Yanlong Wang, Jian Xu, Tiantian Gao, Hongkang Zhang, Shao-Lun Huang, Danny Dongning Sun, Xiao-Ping Zhang
AI4TS · 10 Mar 2025

Foundation Models for Time Series Analysis: A Tutorial and Survey
Yuxuan Liang, Haomin Wen, Yuqi Nie, Yushan Jiang, Ming Jin, Dongjin Song, Shirui Pan, Qingsong Wen
AI4TS, AI4CE · 21 Mar 2024

SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention
Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov, Aladin Virmaux, Giuseppe Paolo, Themis Palpanas, I. Redko
AI4TS · 15 Feb 2024

MOMENT: A Family of Open Time-series Foundation Models
Mononito Goswami, Konrad Szafer, Arjun Choudhry, Yifu Cai, Shuo Li, Artur Dubrawski
AIFin, AI4TS · 06 Feb 2024

Unified Training of Universal Time Series Forecasting Transformers
Gerald Woo, Chenghao Liu, Akshat Kumar, Caiming Xiong, Silvio Savarese, Doyen Sahoo
AI4TS · 04 Feb 2024

UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting
Xu Liu, Junfeng Hu, Yuan N. Li, Shizhe Diao, Yuxuan Liang, Bryan Hooi, Roger Zimmermann
AI4TS · 15 Oct 2023

A decoder-only foundation model for time-series forecasting
Abhimanyu Das, Weihao Kong, Rajat Sen, Yichen Zhou
AI4TS, AI4CE · 14 Oct 2023

iTransformer: Inverted Transformers Are Effective for Time Series Forecasting
Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, Mingsheng Long
AI4TS · 10 Oct 2023

Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, ..., Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen
AI4TS · 03 Oct 2023

LLM4TS: Aligning Pre-Trained LLMs as Data-Efficient Time-Series Forecasters
Ching Chang, Wei-Yao Wang, Wenjie Peng, Tien-Fu Chen
AI4TS · 16 Aug 2023

Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping
Zhe Li, Shiyi Qi, Yiduo Li, Zenglin Xu
AI4TS · 18 May 2023

TSMixer: An All-MLP Architecture for Time Series Forecasting
Si-An Chen, Chun-Liang Li, Nate Yoder, Sercan O. Arik, Tomas Pfister
AI4TS · 10 Mar 2023

One Fits All: Power General Time Series Analysis by Pretrained LM
Tian Zhou, Peisong Niu, Xue Wang, Liang Sun, Rong Jin
AI4TS · 23 Feb 2023

A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam
AIFin, AI4TS · 27 Nov 2022

Deep Learning for Time Series Anomaly Detection: A Survey
Zahra Zamanzadeh Darban, G. I. Webb, Shirui Pan, Charu C. Aggarwal, Mahsa Salehi
AI4TS · 09 Nov 2022

TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis
Haixu Wu, Teng Hu, Yong Liu, Hang Zhou, Jianmin Wang, Mingsheng Long
AI4TS, AIFin · 05 Oct 2022

Are Transformers Effective for Time Series Forecasting?
Ailing Zeng, Mu-Hwa Chen, L. Zhang, Qiang Xu
AI4TS · 26 May 2022

Task Adaptive Parameter Sharing for Multi-Task Learning
Matthew Wallingford, Hao Li, Alessandro Achille, Avinash Ravichandran, Charless C. Fowlkes, Rahul Bhotika, Stefano Soatto
MoMe · 30 Mar 2022

FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting
Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, Rong Jin
AI4TS · 30 Jan 2022

Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting
Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long
AI4TS · 24 Jun 2021

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, B. Guo
ViT · 25 Mar 2021

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
ViT · 22 Oct 2020

Sharpness-Aware Minimization for Efficiently Improving Generalization
Pierre Foret, Ariel Kleiner, H. Mobahi, Behnam Neyshabur
AAML · 03 Oct 2020

Reformer: The Efficient Transformer
Nikita Kitaev, Lukasz Kaiser, Anselm Levskaya
VLM · 13 Jan 2020

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut
SSL, AIMat · 26 Sep 2019

Sharing Attention Weights for Fast Transformer
Tong Xiao, Yinqiao Li, Jingbo Zhu, Zhengtao Yu, Tongran Liu
26 Jun 2019

N-BEATS: Neural basis expansion analysis for interpretable time series forecasting
Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, Yoshua Bengio
AI4TS · 24 May 2019

An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
Shaojie Bai, J. Zico Kolter, V. Koltun
DRL · 04 Mar 2018

DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks
David Salinas, Valentin Flunkert, Jan Gasthaus
AI4TS, UQCV, BDL · 13 Apr 2017