VAMamba: An Efficient Visual Adaptive Mamba for Image Restoration

Recent Mamba-based image restoration methods have achieved promising results but remain limited by fixed scanning patterns and inefficient feature utilization. Conventional Mamba architectures rely on predetermined paths that cannot adapt to diverse degradations, constraining both restoration performance and computational efficiency. To overcome these limitations, we propose VAMamba, a Visual Adaptive Mamba framework with two key innovations. First, QCLAM (Queue-based Cache Low-rank Adaptive Memory) enhances feature learning through a FIFO cache that stores historical representations. Similarity between current LoRA-adapted and cached features guides intelligent fusion, enabling dynamic reuse while effectively controlling memory overhead. Second, GPS-SS2D (Greedy Path Scan SS2D) introduces adaptive scanning. A Vision Transformer generates score maps to estimate pixel importance, and a greedy strategy determines optimal forward and backward scanning paths. These learned trajectories replace rigid patterns, enabling SS2D to perform targeted feature extraction. The integration of QCLAM and GPS-SS2D allows VAMamba to adaptively focus on degraded regions while maintaining high computational efficiency. Extensive experiments across diverse restoration tasks demonstrate that VAMamba consistently outperforms existing approaches in both restoration quality and efficiency, establishing new benchmarks for adaptive image restoration. Our code is available at this https URL.
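The abstract describes QCLAM only at a high level. A minimal sketch of that mechanism, as we read it, might look as follows: a bounded FIFO cache of historical features, a LoRA-style low-rank adapter on the current features, and cosine similarity between the two weighting the fusion. All shapes, module names, hyperparameters, and the softmax fusion rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of QCLAM (Queue-based Cache Low-rank Adaptive Memory):
# a FIFO cache of historical features plus similarity-guided fusion of
# LoRA-adapted current features. Shapes and the fusion rule are assumptions.
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


class QCLAMSketch(nn.Module):
    def __init__(self, dim: int, rank: int = 8, cache_size: int = 4):
        super().__init__()
        # Low-rank (LoRA-style) adapter: dim -> rank -> dim.
        self.lora_down = nn.Linear(dim, rank, bias=False)
        self.lora_up = nn.Linear(rank, dim, bias=False)
        # Bounded FIFO cache; deque(maxlen=...) evicts the oldest entry.
        self.cache: deque[torch.Tensor] = deque(maxlen=cache_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) token features; cached entries are assumed to
        # share this shape across steps.
        adapted = x + self.lora_up(self.lora_down(x))
        if self.cache:
            # Cosine similarity between adapted and each cached feature
            # map, pooled over tokens, weights the fusion.
            sims = torch.stack([
                F.cosine_similarity(adapted, c, dim=-1).mean()
                for c in self.cache
            ])
            weights = torch.softmax(sims, dim=0)
            memory = sum(w * c for w, c in zip(weights, self.cache))
            adapted = adapted + memory
        # Detach before caching so the cache does not grow the autograd graph.
        self.cache.append(adapted.detach())
        return adapted
```

The bounded deque is what keeps memory overhead fixed regardless of how many steps have been processed, which matches the abstract's claim of dynamic reuse with controlled cost.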
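GPS-SS2D is likewise only named in the abstract. One plausible reading, sketched below under stated assumptions, is that the score map ranks pixels and the forward scan visits positions in descending importance, with the backward path as its reverse. The pure-argsort "greedy" ordering and the function names here are placeholders, not the paper's method.

```python
# Hypothetical sketch of greedy scan-path construction from a score map,
# in the spirit of GPS-SS2D: a score network (e.g. a small ViT) estimates
# per-pixel importance; the forward path visits positions in descending
# score order and the backward path is its reverse.
import torch


def greedy_scan_paths(scores: torch.Tensor):
    """scores: (B, H, W) per-pixel importance map.

    Returns forward/backward index permutations of length H*W per batch.
    """
    b, h, w = scores.shape
    flat = scores.reshape(b, h * w)
    forward = flat.argsort(dim=-1, descending=True)  # most important first
    backward = forward.flip(-1)                      # reversed trajectory
    return forward, backward


def scan_features(x: torch.Tensor, path: torch.Tensor) -> torch.Tensor:
    """Reorder token features x: (B, N, C) along a scan path: (B, N)."""
    return x.gather(1, path.unsqueeze(-1).expand(-1, -1, x.size(-1)))
```

In the full model, the reordered sequence would presumably be consumed by the selective-scan (SS2D) kernel and scattered back to its spatial layout afterward; that machinery is omitted here.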