Analysis and Design of Convolutional Networks via Hierarchical Tensor Decompositions

5 May 2017 · arXiv:1705.02302
Nadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira, Amnon Shashua

Papers citing "Analysis and Design of Convolutional Networks via Hierarchical Tensor Decompositions"

27 / 27 papers shown
 1. Reduced Order Models and Conditional Expectation -- Analysing Parametric Low-Order Approximations
    Hermann G. Matthies · 17 Feb 2025
 2. Lecture Notes on Linear Neural Networks: A Tale of Optimization and Generalization in Deep Learning
    Nadav Cohen, Noam Razin · 25 Aug 2024
 3. Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs
    Or Sharir, Anima Anandkumar · 27 Jul 2023
 4. What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement
    Yotam Alexander, Nimrod De La Vega, Noam Razin, Nadav Cohen · 20 Mar 2023
 5. On the Ability of Graph Neural Networks to Model Interactions Between Vertices
    Noam Razin, Tom Verbin, Nadav Cohen · 29 Nov 2022
 6. Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems
    D. Navon, A. Bronstein · 17 Aug 2022
 7. STD-NET: Search of Image Steganalytic Deep-learning Architecture via Hierarchical Tensor Decomposition
    Shunquan Tan, Qiushi Li, Laiyuan Li, Bin Li, Jiwu Huang · 12 Jun 2022
 8. Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks
    Noam Razin, Asaf Maman, Nadav Cohen · 27 Jan 2022
 9. The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design
    Yoav Levine, Noam Wies, Daniel Jannai, D. Navon, Yedid Hoshen, Amnon Shashua · 09 Oct 2021
10. Learning a Self-Expressive Network for Subspace Clustering
    Shangzhi Zhang, Chong You, René Vidal, Chun-Guang Li · 08 Oct 2021
11. Which transformer architecture fits my data? A vocabulary bottleneck in self-attention
    Noam Wies, Yoav Levine, Daniel Jannai, Amnon Shashua · 09 May 2021
12. Towards Extremely Compact RNNs for Video Recognition with Fully Decomposed Hierarchical Tucker Structure
    Miao Yin, Siyu Liao, Xiao-Yang Liu, Xiaodong Wang, Bo Yuan · 12 Apr 2021
13. Implicit Regularization in Tensor Factorization
    Noam Razin, Asaf Maman, Nadav Cohen · 19 Feb 2021
14. Computational Separation Between Convolutional and Fully-Connected Networks
    Eran Malach, Shai Shalev-Shwartz · 03 Oct 2020
15. Complexity for deep neural networks and other characteristics of deep feature representations
    R. Janik, Przemek Witaszczyk · 08 Jun 2020
16. Implicit Regularization in Deep Learning May Not Be Explainable by Norms
    Noam Razin, Nadav Cohen · 13 May 2020
17. Compressing Recurrent Neural Networks Using Hierarchical Tucker Tensor Decomposition
    Miao Yin, Siyu Liao, Xiao-Yang Liu, Xiaodong Wang, Bo Yuan · 09 May 2020
18. Expressive power of tensor-network factorizations for probabilistic modeling, with applications from hidden Markov models to quantum machine learning
    I. Glasser, R. Sweke, Nicola Pancotti, Jens Eisert, J. I. Cirac · 08 Jul 2019
19. Number-State Preserving Tensor Networks as Classifiers for Supervised Learning
    G. Evenbly · 15 May 2019
20. Shortcut Matrix Product States and its applications
    Zhuan Li, Pan Zhang · 13 Dec 2018
21. Deep Compression of Sum-Product Networks on Tensor Networks
    Ching-Yun Ko, Cong Chen, Yuke Zhang, Kim Batselier, Ngai Wong · 09 Nov 2018
22. From probabilistic graphical models to generalized tensor networks for supervised learning
    I. Glasser, Nicola Pancotti, J. I. Cirac · 15 Jun 2018
23. Interpreting Deep Learning: The Machine Learning Rorschach Test?
    Adam S. Charles · 01 Jun 2018
24. Universal approximations of invariant maps by neural networks
    Dmitry Yarotsky · 26 Apr 2018
25. On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
    Sanjeev Arora, Nadav Cohen, Elad Hazan · 19 Feb 2018
26. Information Scaling Law of Deep Neural Networks
    Xiao-Yang Liu · 13 Feb 2018
27. Provably efficient neural network representation for image classification
    Yichen Huang · 13 Nov 2017