Eventful Transformers: Leveraging Temporal Redundancy in Vision Transformers (arXiv:2308.13494)

25 August 2023
Matthew Dutson, Yin Li, M. Gupta
ViT

Papers citing "Eventful Transformers: Leveraging Temporal Redundancy in Vision Transformers"

14 / 14 papers shown
Not Every Patch is Needed: Towards a More Efficient and Effective Backbone for Video-based Person Re-identification
Lanyun Zhu, T. Chen, Deyi Ji, Jieping Ye, J. Liu
39 · 2 · 0 · 28 Jan 2025

Learning K-U-Net with constant complexity: An Application to time series forecasting
Jiang You, A. Çela, René Natowicz, Jacob Ouanounou, Patrick Siarry
AI4TS
23 · 0 · 0 · 03 Oct 2024

Energy-Efficient & Real-Time Computer Vision with Intelligent Skipping via Reconfigurable CMOS Image Sensors
Md. Abdullah-Al Kaiser, Sreetama Sarkar, P. Beerel, Akhilesh R. Jaiswal, Gourav Datta
26 · 2 · 0 · 25 Sep 2024

MaskVD: Region Masking for Efficient Video Object Detection
Sreetama Sarkar, Gourav Datta, Souvik Kundu, Kai Zheng, Chirayata Bhattacharyya, P. Beerel
25 · 3 · 0 · 16 Jul 2024

Expedited Training of Visual Conditioned Language Generation via Redundancy Reduction
Yiren Jian, Tingkai Liu, Yunzhe Tao, Chunhui Zhang, Soroush Vosoughi, HX Yang
VLM
17 · 7 · 0 · 05 Oct 2023

Token Pooling in Vision Transformers
D. Marin, Jen-Hao Rick Chang, Anurag Ranjan, Anish K. Prabhu, Mohammad Rastegari, Oncel Tuzel
ViT
73 · 66 · 0 · 08 Oct 2021

Combiner: Full Attention Transformer with Sparse Computation Cost
Hongyu Ren, H. Dai, Zihang Dai, Mengjiao Yang, J. Leskovec, Dale Schuurmans, Bo Dai
73 · 77 · 0 · 12 Jul 2021

Skip-Convolutions for Efficient Video Processing
A. Habibian, Davide Abati, Taco S. Cohen, B. Bejnordi
54 · 50 · 0 · 23 Apr 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT
277 · 3,622 · 0 · 24 Feb 2021

Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani
ViT
280 · 1,981 · 0 · 09 Feb 2021

Memory Enhanced Global-Local Aggregation for Video Object Detection
Yihong Chen, Yue Cao, Han Hu, Liwei Wang
112 · 261 · 0 · 26 Mar 2020

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
MoE
243 · 579 · 0 · 12 Mar 2020

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH
950 · 20,561 · 0 · 17 Apr 2017

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
VLM, ObjD
296 · 39,194 · 0 · 01 Sep 2014