FactoFormer: Factorized Hyperspectral Transformers with Self-Supervised Pretraining

18 September 2023
Shaheer Mohamed, Maryam Haghighat, Tharindu Fernando, S. Sridharan, Clinton Fookes, Peyman Moghadam
ViT
arXiv:2309.09431

Papers citing "FactoFormer: Factorized Hyperspectral Transformers with Self-Supervised Pretraining" (5 papers shown)
Spatioformer: A Geo-encoded Transformer for Large-Scale Plant Species Richness Prediction
Yiqing Guo, K. Mokany, S. Levick, Jinyan Yang, P. Moghadam
MDE
25 Oct 2024
Pre-training with Random Orthogonal Projection Image Modeling
Maryam Haghighat, Peyman Moghadam, Shaheer Mohamed, Piotr Koniusz
VLM
28 Oct 2023
Plant species richness prediction from DESIS hyperspectral data: A comparison study on feature extraction procedures and regression models
Yiqing Guo, K. Mokany, C. Ong, P. Moghadam, S. Ferrier, S. Levick
05 Jan 2023
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM
11 Nov 2021
Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021