Dynamic Neural Networks: A Survey

9 February 2021
Yizeng Han
Gao Huang
Shiji Song
Le Yang
Honghui Wang
Yulin Wang
    3DH
    AI4TS
    AI4CE

Papers citing "Dynamic Neural Networks: A Survey"

50 / 97 papers shown
DPNet: Dynamic Pooling Network for Tiny Object Detection
Luqi Gong
Haotian Chen
Y. Chen
Tianliang Yao
Chao Li
Shuai Zhao
Guangjie Han
ObjD
134
0
0
05 May 2025
DyDiT++: Dynamic Diffusion Transformers for Efficient Visual Generation
Wangbo Zhao
Yizeng Han
Jiasheng Tang
Kai Wang
Hao Luo
Yibing Song
Gao Huang
Fan Wang
Yang You
69
0
0
09 Apr 2025
Adaptive Rank Allocation: Speeding Up Modern Transformers with RaNA Adapters
Roberto Garcia
Jerry Liu
Daniel Sorvisto
Sabri Eyuboglu
90
0
0
23 Mar 2025
Learning to Inference Adaptively for Multimodal Large Language Models
Zhuoyan Xu
Khoi Duc Nguyen
Preeti Mukherjee
Saurabh Bagchi
Somali Chaterji
Yingyu Liang
Yin Li
LRM
44
1
0
13 Mar 2025
Fast and Accurate Gigapixel Pathological Image Classification with Hierarchical Distillation Multi-Instance Learning
Jiuyang Dong
Junjun Jiang
Kui Jiang
Jiahan Li
Yongbing Zhang
40
0
0
28 Feb 2025
NeRFCom: Feature Transform Coding Meets Neural Radiance Field for Free-View 3D Scene Semantic Transmission
Weijie Yue
Zhongwei Si
Bolin Wu
Sixian Wang
Xiaoqi Qin
K. Niu
Jincheng Dai
Ping Zhang
61
0
0
27 Feb 2025
Ray-Tracing for Conditionally Activated Neural Networks
Claudio Gallicchio
Giuseppe Nuti
AI4CE
55
0
0
21 Feb 2025
Convolutional Neural Networks and Mixture of Experts for Intrusion Detection in 5G Networks and beyond
Loukas Ilias
George Doukas
Vangelis Lamprou
Christos Ntanos
D. Askounis
MoE
77
1
0
04 Dec 2024
ViMoE: An Empirical Study of Designing Vision Mixture-of-Experts
Xumeng Han
Longhui Wei
Zhiyang Dou
Zipeng Wang
Chenhui Qiang
Xin He
Yingfei Sun
Zhenjun Han
Qi Tian
MoE
39
3
0
21 Oct 2024
Router-Tuning: A Simple and Effective Approach for Enabling Dynamic-Depth in Transformers
Shwai He
Tao Ge
Guoheng Sun
Bowei Tian
Xiaoyang Wang
Ang Li
MoE
46
1
0
17 Oct 2024
More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing
Sagi Shaier
Francisco Pereira
K. Wense
Lawrence E Hunter
Matt Jones
MoE
46
0
0
10 Oct 2024
Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE
Florence Regol
Joud Chataoui
Bertrand Charpentier
Mark J. Coates
Pablo Piantanida
Stephan Gunnemann
37
0
0
20 Jun 2024
A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training
Kai Wang
Yukun Zhou
Mingjia Shi
Zhihang Yuan
Yuzhang Shang
Hanwang Zhang
Yang You
63
10
0
27 May 2024
Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models
Yongxin Guo
Zhenglin Cheng
Xiaoying Tang
Tao Lin
MoE
53
7
0
23 May 2024
Tiny Models are the Computational Saver for Large Models
Qingyuan Wang
B. Cardiff
Antoine Frappé
Benoît Larras
Deepu John
29
2
0
26 Mar 2024
D-Net: Dynamic Large Kernel with Dynamic Feature Fusion for Volumetric Medical Image Segmentation
Jin Yang
Peijie Qiu
Yichi Zhang
Daniel S. Marcus
Aristeidis Sotiras
MedIm
36
9
0
15 Mar 2024
EncodingNet: A Novel Encoding-based MAC Design for Efficient Neural Network Acceleration
Bo Liu
Grace Li Zhang
Xunzhao Yin
Ulf Schlichtmann
Bing Li
MQ
AI4CE
18
0
0
25 Feb 2024
Adaptive Inference: Theoretical Limits and Unexplored Opportunities
S. Hor
Ying Qian
Mert Pilanci
Amin Arbabian
16
0
0
06 Feb 2024
DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers
Oryan Yehezkel
Alon Zolfi
Amit Baras
Yuval Elovici
A. Shabtai
AAML
27
0
0
04 Feb 2024
Investigating Recurrent Transformers with Dynamic Halt
Jishnu Ray Chowdhury
Cornelia Caragea
39
1
0
01 Feb 2024
Adaptive Depth Networks with Skippable Sub-Paths
Woochul Kang
28
1
0
27 Dec 2023
Mask Grounding for Referring Image Segmentation
Yong Xien Chng
Henry Zheng
Yizeng Han
Xuchong Qiu
Gao Huang
ISeg
ObjD
26
15
0
19 Dec 2023
EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism
Yanxi Chen
Xuchen Pan
Yaliang Li
Bolin Ding
Jingren Zhou
LRM
33
31
0
08 Dec 2023
Subnetwork-to-go: Elastic Neural Network with Dynamic Training and Customizable Inference
Kai Li
Yi Luo
21
2
0
06 Dec 2023
REDS: Resource-Efficient Deep Subnetworks for Dynamic Resource Constraints
Francesco Corti
Balz Maag
Joachim Schauer
U. Pferschy
O. Saukh
26
2
0
22 Nov 2023
PAUMER: Patch Pausing Transformer for Semantic Segmentation
Evann Courdier
Prabhu Teja Sivaprasad
F. Fleuret
31
2
0
01 Nov 2023
Dynamic nsNet2: Efficient Deep Noise Suppression with Early Exiting
Riccardo Miccini
Alaa Zniber
Clément Laroche
Tobias Piechowiak
Martin Schoeberl
Luca Pezzarossa
Ouassim Karrakchou
J. Sparsø
Mounir Ghogho
25
1
0
31 Aug 2023
Long-Distance Gesture Recognition using Dynamic Neural Networks
Shubhang Bhatnagar
S. Gopal
N. Ahuja
Liu Ren
26
3
0
09 Aug 2023
Uncertainty-Guided Spatial Pruning Architecture for Efficient Frame Interpolation
Ri Cheng
Xuhao Jiang
Ruian He
Shili Zhou
Weimin Tan
Bo Yan
41
2
0
31 Jul 2023
Learning to simulate partially known spatio-temporal dynamics with trainable difference operators
Xiang Huang
Zhuoyuan Li
Hongsheng Liu
Zidong Wang
Hongye Zhou
Bin Dong
Bei Hua
AI4TS
AI4CE
27
1
0
26 Jul 2023
Revisiting Stereo Triangulation in UAV Distance Estimation
Jiafan Zhuang
Duan Yuan
Rihong Yan
Weixin Huang
Yutao Zhou
Zhun Fan
32
0
0
15 Jun 2023
Lifting the Curse of Capacity Gap in Distilling Language Models
Chen Zhang
Yang Yang
Jiahao Liu
Jingang Wang
Yunsen Xian
Benyou Wang
Dawei Song
MoE
32
19
0
20 May 2023
Small-footprint slimmable networks for keyword spotting
Zuhaib Akhtar
Mohammad Omar Khursheed
Dongsu Du
Yuzong Liu
30
2
0
21 Apr 2023
Large-scale Dynamic Network Representation via Tensor Ring Decomposition
Qu Wang
13
0
0
18 Apr 2023
Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution
Z. Su
Jiehua Zhang
Tianpeng Liu
Zhen Liu
Shuanghui Zhang
M. Pietikäinen
Li Liu
27
2
0
13 Apr 2023
DynamicDet: A Unified Dynamic Architecture for Object Detection
Zhi-Hao Lin
Yongtao Wang
Jinhe Zhang
Xiaojie Chu
ObjD
23
30
0
12 Apr 2023
Memorization Capacity of Neural Networks with Conditional Computation
Erdem Koyuncu
30
4
0
20 Mar 2023
Adaptive Rotated Convolution for Rotated Object Detection
Yifan Pu
Yiru Wang
Zhuofan Xia
Yizeng Han
Yulin Wang
Weihao Gan
Zidong Wang
S. Song
Gao Huang
17
76
0
14 Mar 2023
I3D: Transformer architectures with input-dependent dynamic depth for speech recognition
Yifan Peng
Jaesong Lee
Shinji Watanabe
22
19
0
14 Mar 2023
A Dynamic Temporal Self-attention Graph Convolutional Network for Traffic Prediction
Ruiyuan Jiang
Shangbo Wang
Yuli Zhang
AI4TS
13
0
0
21 Feb 2023
Fixing Overconfidence in Dynamic Neural Networks
Lassi Meronen
Martin Trapp
Andrea Pilzer
Le Yang
Arno Solin
BDL
28
16
0
13 Feb 2023
Towards Inference Efficient Deep Ensemble Learning
Ziyue Li
Kan Ren
Yifan Yang
Xinyang Jiang
Yuqing Yang
Dongsheng Li
BDL
21
12
0
29 Jan 2023
Dynamic Grained Encoder for Vision Transformers
Lin Song
Songyang Zhang
Songtao Liu
Zeming Li
Xuming He
Hongbin Sun
Jian-jun Sun
Nanning Zheng
ViT
26
34
0
10 Jan 2023
Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
Hai Wu
Ruifei He
Hao Hao Tan
Xiaojuan Qi
Kaibin Huang
MQ
19
2
0
10 Dec 2022
Deep Incubation: Training Large Models by Divide-and-Conquering
Zanlin Ni
Yulin Wang
Jiangwei Yu
Haojun Jiang
Yu Cao
Gao Huang
VLM
18
11
0
08 Dec 2022
Vision Transformer Computation and Resilience for Dynamic Inference
Kavya Sreedhar
Jason Clemons
Rangharajan Venkatesan
S. Keckler
M. Horowitz
24
2
0
06 Dec 2022
Fast Inference from Transformers via Speculative Decoding
Yaniv Leviathan
Matan Kalman
Yossi Matias
LRM
44
618
0
30 Nov 2022
Boosted Dynamic Neural Networks
Haichao Yu
Haoxiang Li
G. Hua
Gao Huang
Humphrey Shi
30
7
0
30 Nov 2022
Latent Iterative Refinement for Modular Source Separation
Dimitrios Bralios
Efthymios Tzinis
G. Wichern
Paris Smaragdis
Jonathan Le Roux
BDL
25
5
0
22 Nov 2022
Dynamic-Pix2Pix: Noise Injected cGAN for Modeling Input and Target Domain Joint Distributions with Limited Training Data
Mohammadreza Naderi
N. Karimi
Ali Emami
S. Shirani
S. Samavi
18
0
0
15 Nov 2022