
Applying Vision Transformers on Spectral Analysis of Astronomical Objects

Abstract

We apply pre-trained Vision Transformers (ViTs), originally developed for image recognition, to the analysis of astronomical spectral data. By converting traditional one-dimensional spectra into two-dimensional image representations, we enable ViTs to capture both local and global spectral features through spatial self-attention. We fine-tune a ViT pretrained on ImageNet using millions of spectra from the SDSS and LAMOST surveys, represented as spectral plots. Our model is evaluated on key tasks including stellar object classification and redshift ($z$) estimation, where it demonstrates strong performance and scalability. We achieve classification accuracy higher than Support Vector Machines and Random Forests, and attain $R^2$ values comparable to AstroCLIP's spectrum encoder, even when generalizing across diverse object types. These results demonstrate the effectiveness of using pretrained vision models for spectroscopic data analysis. To our knowledge, this is the first application of ViTs to large-scale, real spectroscopic data that does not rely on synthetic inputs.
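The pipeline described above can be sketched in a few lines: render a 1D spectrum as a 2D plot image and pass it to an ImageNet-pretrained ViT with a fresh classification head. The plotting style, the model variant (vit_base_patch16_224 via timm), the three-class head, and the toy synthetic spectrum below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering of spectral plots
import matplotlib.pyplot as plt
import torch
import timm


def spectrum_to_image(wavelength, flux, size=224):
    """Render a 1D spectrum (flux vs. wavelength) as a size x size RGB array."""
    fig, ax = plt.subplots(figsize=(size / 100, size / 100), dpi=100)
    fig.subplots_adjust(left=0, right=1, top=1, bottom=0)  # fill the canvas
    ax.plot(wavelength, flux, color="black", linewidth=0.7)
    ax.axis("off")
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba(), dtype=np.uint8)[..., :3].copy()
    plt.close(fig)
    return img


# ImageNet-pretrained ViT with a new head for a hypothetical 3-class problem;
# in practice the SDSS/LAMOST label space defines num_classes.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=3)

# Toy example: a synthetic spectrum stands in for a real survey spectrum.
wavelength = np.linspace(3800, 9200, 4000)
flux = np.sin(wavelength / 200.0) + 0.05 * np.random.randn(wavelength.size)

img = spectrum_to_image(wavelength, flux)                        # (224, 224, 3) uint8
x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0
logits = model(x)                                                # (1, 3) class scores
```

In a real fine-tuning run one would also apply the model's expected normalization (e.g. the ImageNet mean/std transforms) and train the head, or the full network, with a standard cross-entropy or regression loss depending on the task.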

@article{moraes2025_2506.00294,
  title={Applying Vision Transformers on Spectral Analysis of Astronomical Objects},
  author={Luis Felipe Strano Moraes and Ignacio Becker and Pavlos Protopapas and Guillermo Cabrera-Vives},
  journal={arXiv preprint arXiv:2506.00294},
  year={2025}
}