
POCO: Scalable Neural Forecasting through Population Conditioning

Yu Duan
Hamza Tahir Chaudhry
Misha B. Ahrens
Christopher D. Harvey
Matthew G. Perich
Karl Deisseroth
Kanaka Rajan
Main: 9 pages · 21 figures · 10 tables · Bibliography: 4 pages · Appendix: 26 pages
Abstract

Predicting future neural activity is a core challenge in modeling brain dynamics, with applications ranging from scientific investigation to closed-loop neurotechnology. While recent models of population activity emphasize interpretability and behavioral decoding, neural forecasting, particularly across multi-session, spontaneous recordings, remains underexplored. We introduce POCO, a unified forecasting model that combines a lightweight univariate forecaster with a population-level encoder to capture both neuron-specific and brain-wide dynamics. Trained across five calcium imaging datasets spanning zebrafish, mice, and C. elegans, POCO achieves state-of-the-art accuracy at cellular resolution during spontaneous behavior. After pre-training, POCO rapidly adapts to new recordings with minimal fine-tuning. Notably, POCO's learned unit embeddings recover biologically meaningful structure, such as brain-region clustering, without any anatomical labels. Our comprehensive analysis reveals several key factors influencing performance, including context length, session diversity, and preprocessing. Together, these results position POCO as a scalable and adaptable approach to cross-session neural forecasting and offer actionable insights for future model design. By enabling accurate, generalizable forecasting of neural dynamics across individuals and species, POCO lays the groundwork for adaptive neurotechnologies and large-scale efforts toward neural foundation models.
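
To make the architecture described above concrete, here is a minimal PyTorch sketch of the population-conditioning idea: a per-unit forecaster whose hidden state is modulated by a learned unit embedding and a brain-wide summary vector. All module names, layer sizes, and the FiLM-style modulation are illustrative assumptions for this sketch, not the authors' implementation.

import torch
import torch.nn as nn

class PopulationConditionedForecaster(nn.Module):
    def __init__(self, n_units: int, context_len: int, horizon: int, d_model: int = 64):
        super().__init__()
        # Per-unit embedding: lets the model learn neuron-specific structure
        # (the abstract reports such embeddings recover brain-region clustering).
        self.unit_embed = nn.Embedding(n_units, d_model)
        # Population-level encoder: summarizes the full recording window
        # into a single brain-wide context vector.
        self.pop_encoder = nn.Sequential(
            nn.Linear(context_len, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Lightweight univariate forecaster applied to each unit independently,
        # conditioned on (unit embedding, population context) via an assumed
        # FiLM-style multiplicative modulation.
        self.to_scale = nn.Linear(2 * d_model, d_model)
        self.mlp_in = nn.Linear(context_len, d_model)
        self.mlp_out = nn.Linear(d_model, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_units, context_len) calcium traces
        b, n, t = x.shape
        pop = self.pop_encoder(x).mean(dim=1)              # (b, d) brain-wide summary
        ids = torch.arange(n, device=x.device)
        cond = torch.cat([self.unit_embed(ids).expand(b, n, -1),
                          pop.unsqueeze(1).expand(b, n, -1)], dim=-1)
        h = self.mlp_in(x) * self.to_scale(cond)           # condition per-unit forecaster
        return self.mlp_out(torch.relu(h))                 # (b, n, horizon) forecast

Consistent with the rapid-adaptation result reported in the abstract, adapting a model of this shape to a new recording might only require re-learning unit_embed (since unit identities change across sessions) while the shared encoder and forecaster remain frozen; whether POCO does exactly this is not specified here.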

@article{duan2025_2506.14957,
  title={POCO: Scalable Neural Forecasting through Population Conditioning},
  author={Yu Duan and Hamza Tahir Chaudhry and Misha B. Ahrens and Christopher D. Harvey and Matthew G. Perich and Karl Deisseroth and Kanaka Rajan},
  journal={arXiv preprint arXiv:2506.14957},
  year={2025}
}