Time-, Memory- and Parameter-Efficient Visual Adaptation
arXiv:2402.02887 · 5 February 2024
Otniel-Bogdan Mercea, Alexey Gritsenko, Cordelia Schmid, Anurag Arnab
VLM

Papers citing "Time-, Memory- and Parameter-Efficient Visual Adaptation"

35 / 35 papers shown
Title
EDTformer: An Efficient Decoder Transformer for Visual Place Recognition
Tong Jin, Feng Lu, Shuyu Hu, Chun Yuan, Yunpeng Liu
ViT · 112 · 0 · 0 · 01 Dec 2024

MambaPEFT: Exploring Parameter-Efficient Fine-Tuning for Mamba
Masakazu Yoshimura, Teruaki Hayashi, Yota Maeda
Mamba · 244 · 2 · 0 · 06 Nov 2024

Hierarchical Side-Tuning for Vision Transformers
Weifeng Lin, Ziheng Wu, Wentao Yang, Mingxin Huang, Jun Huang, Lianwen Jin
46 · 8 · 0 · 09 Oct 2023

Benchmarking Neural Network Training Algorithms
George E. Dahl, Frank Schneider, Zachary Nado, Naman Agarwal, Chandramouli Shama Sastry, ..., Chris J. Maddison, R. Vasudev, Michal Badura, Ankush Garg, Peter Mattson
60 · 33 · 0 · 12 Jun 2023

TOAST: Transfer Learning via Attention Steering
Baifeng Shi, Siyu Gai, Trevor Darrell, Xin Wang
57 · 11 · 0 · 24 May 2023

Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky
98 · 175 · 0 · 28 Mar 2023

Towards Efficient Visual Adaption via Structural Re-parameterization
Gen Luo, Minglang Huang, Yiyi Zhou, Xiaoshuai Sun, Guannan Jiang, Zhiyu Wang, Rongrong Ji
VLM, VPVLM · 45 · 80 · 0 · 16 Feb 2023

FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer
Shibo Jie, Zhi-Hong Deng
49 · 131 · 0 · 06 Dec 2022

Scaling & Shifting Your Features: A New Baseline for Efficient Model
  Tuning
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
Dongze Lian
Daquan Zhou
Jiashi Feng
Xinchao Wang
74
258
0
17 Oct 2022
PaLI: A Jointly-Scaled Multilingual Language-Image Model
Xi Chen, Tianlin Li, Soravit Changpinyo, A. Piergiovanni, Piotr Padlewski, ..., Andreas Steiner, A. Angelova, Xiaohua Zhai, N. Houlsby, Radu Soricut
MLLM, VLM · 85 · 720 · 0 · 14 Sep 2022

Convolutional Bypasses Are Better Vision Transformer Adapters
Shibo Jie, Zhi-Hong Deng
VPVLM · 59 · 133 · 0 · 14 Jul 2022

ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning
Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, Hongsheng Li
82 · 203 · 0 · 27 Jun 2022

LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning
Yi-Lin Sung, Jaemin Cho, Joey Tianyi Zhou
VLM · 83 · 242 · 0 · 13 Jun 2022

Neural Prompt Search
Yuanhan Zhang, Kaiyang Zhou, Ziwei Liu
VPVLM, VLM · 93 · 149 · 0 · 09 Jun 2022

Flamingo: a Visual Language Model for Few-Shot Learning
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, ..., Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, Karen Simonyan
MLLM, VLM · 374 · 3,535 · 0 · 29 Apr 2022

Visual Prompt Tuning
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim
VLM, VPVLM · 148 · 1,624 · 0 · 23 Mar 2022

Three things everyone should know about Vision Transformers
Hugo Touvron, Matthieu Cord, Alaaeldin El-Nouby, Jakob Verbeek, Hervé Jégou
ViT · 91 · 123 · 0 · 18 Mar 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun
64 · 201 · 0 · 14 Mar 2022

$\mathcal{Y}$-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning
Yitao Liu, Chen An, Xipeng Qiu
69 · 17 · 0 · 20 Feb 2022

The Efficiency Misnomer
Daoyuan Chen, Liuyi Yao, Dawei Gao, Ashish Vaswani, Yaliang Li
91 · 103 · 0 · 25 Oct 2021

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning
Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, Madian Khabsa
58 · 190 · 0 · 14 Oct 2021

Towards a Unified View of Parameter-Efficient Transfer Learning
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig
AAML · 129 · 935 · 0 · 08 Oct 2021

How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, Lucas Beyer
ViT · 107 · 632 · 0 · 18 Jun 2021

BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
Elad Ben-Zaken, Shauli Ravfogel, Yoav Goldberg
162 · 1,218 · 0 · 18 Jun 2021

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder
MoE · 100 · 485 · 0 · 08 Jun 2021

MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
415 · 2,673 · 0 · 04 May 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 552 · 4,036 · 0 · 18 Apr 2021

ViViT: A Video Vision Transformer
Anurag Arnab, Mostafa Dehghani, G. Heigold, Chen Sun, Mario Lucic, Cordelia Schmid
ViT · 215 · 2,149 · 0 · 29 Mar 2021

Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang
229 · 4,254 · 0 · 01 Jan 2021

The Hardware Lottery
Sara Hooker
68 · 211 · 0 · 14 Sep 2020

AdapterFusion: Non-Destructive Task Composition for Transfer Learning
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rucklé, Kyunghyun Cho, Iryna Gurevych
CLL, MoMe · 129 · 849 · 0 · 01 May 2020

A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark
Xiaohua Zhai, J. Puigcerver, Alexander Kolesnikov, P. Ruyssen, C. Riquelme, ..., Michael Tschannen, Marcin Michalski, Olivier Bousquet, Sylvain Gelly, N. Houlsby
SSL · 79 · 439 · 0 · 01 Oct 2019

Revisiting Unreasonable Effectiveness of Data in Deep Learning Era
Chen Sun, Abhinav Shrivastava, Saurabh Singh, Abhinav Gupta
VLM · 185 · 2,398 · 0 · 10 Jul 2017

Learning multiple visual domains with residual adapters
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
OOD · 160 · 933 · 0 · 22 May 2017

Gaussian Error Linear Units (GELUs)
Dan Hendrycks, Kevin Gimpel
169 · 5,000 · 0 · 27 Jun 2016