Speeding up Convolutional Neural Networks with Low Rank Expansions
Max Jaderberg, Andrea Vedaldi, Andrew Zisserman
arXiv:1405.3866 · 15 May 2014
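
For context on what the cited work proposes: the paper approximates the full-rank filter banks of a trained CNN with low-rank, separable expansions so that convolution is cheaper at inference time. The sketch below is only a minimal illustration of that general idea, not the authors' exact reconstruction scheme; it assumes PyTorch, and the module name LowRankConv2d, the channel sizes, and the rank are invented for the example. Replacing a dense k x k convolution with a (k x 1) convolution into r intermediate channels followed by a (1 x k) convolution reduces the per-pixel multiply-accumulate count from roughly k*k*C_in*C_out to about k*r*(C_in + C_out).

import torch
import torch.nn as nn

class LowRankConv2d(nn.Module):
    # Approximates a dense k x k convolution with a vertical (k x 1) convolution
    # into a small rank-r basis, followed by a horizontal (1 x k) convolution
    # that maps the basis onto the output channels.
    def __init__(self, in_channels, out_channels, kernel_size, rank):
        super().__init__()
        pad = kernel_size // 2
        self.vertical = nn.Conv2d(in_channels, rank, (kernel_size, 1),
                                  padding=(pad, 0), bias=False)
        self.horizontal = nn.Conv2d(rank, out_channels, (1, kernel_size),
                                    padding=(0, pad))

    def forward(self, x):
        return self.horizontal(self.vertical(x))

# Example: a rank-16 stand-in for a 64 -> 128 channel 5x5 convolution.
x = torch.randn(1, 64, 32, 32)
layer = LowRankConv2d(64, 128, kernel_size=5, rank=16)
print(layer(x).shape)  # torch.Size([1, 128, 32, 32])

In the paper's setting the low-rank factors are fitted to a pretrained network (e.g. by minimizing filter or data reconstruction error), whereas this sketch only defines the factorized structure.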

Papers citing "Speeding up Convolutional Neural Networks with Low Rank Expansions"

Showing 50 of 236 citing papers.
BackSlash: Rate Constrained Optimized Training of Large Language Models
  Jun Wu, Jiangtao Wen, Yuxing Han · 23 Apr 2025
A Priori Generalizability Estimate for a CNN
  Cito Balsells, Beatrice Riviere, David T. Fuentes · 24 Feb 2025
CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation
  Z. Liu, Ruijie Zhang, Zihan Wang, Zi Yang, Paul Hovland, Bogdan Nicolae, Franck Cappello, Z. Zhang · 16 Feb 2025
A Hardware-Efficient Photonic Tensor Core: Accelerating Deep Neural Networks with Structured Compression
  Shupeng Ning, Hanqing Zhu, Chenghao Feng, Jiaqi Gu, David Z. Pan, Ray T. Chen · 01 Feb 2025
Implicit Bias in Matrix Factorization and its Explicit Realization in a New Architecture
  Yikun Hou, Suvrit Sra, A. Yurtsever · 28 Jan 2025
Task Singular Vectors: Reducing Task Interference in Model Merging
  Antonio Andrea Gargiulo, Donato Crisostomi, Maria Sofia Bucarelli, Simone Scardapane, Fabrizio Silvestri, Emanuele Rodolà · 26 Nov 2024 · MoMe
Collaborative and Efficient Personalization with Mixtures of Adaptors
  Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč · 04 Oct 2024 · FedML
Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement
  Gaurav Patel, Christopher Sandino, Behrooz Mahasseni, Ellen L. Zippi, Erdrin Azemi, Ali Moin, Juri Minxha · 03 Oct 2024 · TTA, AI4TS
Two Sparse Matrices are Better than One: Sparsifying Neural Networks with Double Sparse Factorization
  Vladimír Boža, Vladimír Macko · 27 Sep 2024
Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation
  Can Yaras, Peng Wang, Laura Balzano, Qing Qu · 06 Jun 2024 · AI4CE
Efficient and accurate neural field reconstruction using resistive memory
  Yifei Yu, Shaocong Wang, Woyu Zhang, Xinyuan Zhang, Xiuzhe Wu, ..., Zhongrui Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming-Yu Liu · 15 Apr 2024
Faster and Lighter LLMs: A Survey on Current Challenges and Way Forward
  Arnav Chavan, Raghav Magazine, Shubham Kushwaha, M. Debbah, Deepak Gupta · 02 Feb 2024
Always-Sparse Training by Growing Connections with Guided Stochastic Exploration
  Mike Heddes, Narayan Srinivasa, T. Givargis, Alexandru Nicolau · 12 Jan 2024
ARBiBench: Benchmarking Adversarial Robustness of Binarized Neural Networks
  Peng Zhao, Jiehua Zhang, Bowen Peng, Longguang Wang, Yingmei Wei, Yu Liu, Li Liu · 21 Dec 2023 · AAML
All Rivers Run to the Sea: Private Learning with Asymmetric Flows
  Yue Niu, Ramy E. Ali, Saurav Prakash, Salman Avestimehr · 05 Dec 2023 · FedML
REDS: Resource-Efficient Deep Subnetworks for Dynamic Resource Constraints
  Francesco Corti, Balz Maag, Joachim Schauer, U. Pferschy, O. Saukh · 22 Nov 2023
RepQ: Generalizing Quantization-Aware Training for Re-Parametrized Architectures
  Anastasiia Prutianova, Alexey Zaytsev, Chung-Kuei Lee, Fengyu Sun, Ivan Koryakovskiy · 09 Nov 2023 · MQ
Matrix Compression via Randomized Low Rank and Low Precision Factorization
  R. Saha, Varun Srivastava, Mert Pilanci · 17 Oct 2023
Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks
  Woojin Cho, Kookjin Lee, Donsub Rim, Noseong Park · 14 Oct 2023 · AI4CE, PINN
Maestro: Uncovering Low-Rank Structures via Trainable Decomposition
  Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang · 28 Aug 2023 · BDL
Survey on Computer Vision Techniques for Internet-of-Things Devices
  Ishmeet Kaur, Adwaita Janardhan Jadhav · 02 Aug 2023 · AI4CE
Compact Real-time Radiance Fields with Neural Codebook
  Lingzhi Li, Zhongshu Wang, Zhen Shen, Li Shen, Ping Tan · 29 May 2023
Cuttlefish: Low-Rank Model Training without All the Tuning
  Hongyi Wang, Saurabh Agarwal, Pongsakorn U-chupala, Yoshiki Tanaka, Eric P. Xing, Dimitris Papailiopoulos · 04 May 2023 · OffRL
BiBench: Benchmarking and Analyzing Network Binarization
  Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, Feng Yu, Xianglong Liu · 26 Jan 2023 · MQ, AAML
RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
  A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão · 15 Jan 2023 · MQ
FSCNN: A Fast Sparse Convolution Neural Network Inference System
  Bo Ji, Tianyi Chen · 17 Dec 2022
GhostNetV2: Enhance Cheap Operation with Long-Range Attention
  Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Chaoting Xu, Yunhe Wang · 23 Nov 2022
Compressing Transformer-based self-supervised models for speech processing
  Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang · 17 Nov 2022
Pruning Very Deep Neural Network Channels for Efficient Inference
  Yihui He · 14 Nov 2022
Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models
  Muyang Li, Ji Lin, Chenlin Meng, Stefano Ermon, Song Han, Jun-Yan Zhu · 03 Nov 2022 · DiffM
Fast and Low-Memory Deep Neural Networks Using Binary Matrix Factorization
  Alireza Bordbar, M. Kahaei · 24 Oct 2022 · MQ
Approximating Continuous Convolutions for Deep Network Compression
  Theo W. Costain, V. Prisacariu · 17 Oct 2022
Seeking Interpretability and Explainability in Binary Activated Neural Networks
  Benjamin Leblanc, Pascal Germain · 07 Sep 2022 · FAtt
Semantic Self-adaptation: Enhancing Generalization with a Single Sample
  Sherwin Bahmani, Oliver Hahn, Eduard Zamfir, Nikita Araslanov, Daniel Cremers, Stefan Roth · 10 Aug 2022 · OOD, TTA, VLM
Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification
  M. Shabani, D. Tran, Juho Kanniainen, Alexandros Iosifidis · 23 Jul 2022 · OOD, AIFin
On Efficient Real-Time Semantic Segmentation: A Survey
  Christopher J. Holder, Muhammad Shafique · 17 Jun 2022 · SSeg
Fault-Tolerant Collaborative Inference through the Edge-PRUNE Framework
  Jani Boutellier, Bo Tan, J. Nurmi · 16 Jun 2022
Canonical convolutional neural networks
  Lokesh Veeramacheneni, Moritz Wolter, Reinhard Klein, Jochen Garcke · 03 Jun 2022
Gator: Customizable Channel Pruning of Neural Networks with Gating
  E. Passov, E. David, N. Netanyahu · 30 May 2022 · AAML
Revisiting Random Channel Pruning for Neural Network Compression
  Yawei Li, Kamil Adamczewski, Wen Li, Shuhang Gu, Radu Timofte, Luc Van Gool · 11 May 2022
NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results
  Yawei Li, Kaicheng Zhang, Radu Timofte, Luc Van Gool, F. Kong, ..., Deng-Guang Zhou, Kun Zeng, Han-Yuan Lin, Xinyu Chen, Jin-Tao Fang · 11 May 2022 · SupR
Statistical Guarantees for Approximate Stationary Points of Simple Neural Networks
  Mahsa Taheri, Fang Xie, Johannes Lederer · 09 May 2022
Edge-PRUNE: Flexible Distributed Deep Learning Inference
  Jani Boutellier, Bo Tan, J. Nurmi · 27 Apr 2022
VisCUIT: Visual Auditor for Bias in CNN Image Classifier
  Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng Chau · 12 Apr 2022
Compact Model Training by Low-Rank Projection with Energy Transfer
  K. Guo, Zhenquan Lin, Xiaofen Xing, Fang Liu, Xiangmin Xu · 12 Apr 2022
ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective
  Jinbin Huang, Aditi Mishra, Bum Chul Kwon, Chris Bryan · 04 Apr 2022 · FAtt, HAI
Learning Compressed Embeddings for On-Device Inference
  Niketan Pansare, J. Katukuri, Aditya Arora, F. Cipollone, R. Shaik, Noyan Tokgozoglu, Chandru Venkataraman · 18 Mar 2022
DNN Training Acceleration via Exploring GPGPU Friendly Sparsity
  Zhuoran Song, Yihong Xu, Han Li, Naifeng Jing, Xiaoyao Liang, Li Jiang · 11 Mar 2022
projUNN: efficient method for training deep networks with unitary matrices
  B. Kiani, Randall Balestriero, Yann LeCun, S. Lloyd · 10 Mar 2022
Compressing CNN Kernels for Videos Using Tucker Decompositions: Towards Lightweight CNN Applications
  Tobias Engelhardt Rasmussen, Line H. Clemmensen, Andreas Baum · 10 Mar 2022