ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2210.12818 (Cited By)
Pushing the Efficiency Limit Using Structured Sparse Convolutions

23 October 2022
Vinay Kumar Verma, Nikhil Mehta, Shijing Si, Ricardo Henao, Lawrence Carin
Papers citing "Pushing the Efficiency Limit Using Structured Sparse Convolutions"

2 papers
1. Magnitude Attention-based Dynamic Pruning
   Jihye Back, Namhyuk Ahn, Jang-Hyun Kim
   08 Jun 2023

2. Aggregated Residual Transformations for Deep Neural Networks
   Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
   16 Nov 2016