Provable and Efficient Continual Representation Learning
arXiv:2203.02026 · v2 (latest) · 3 March 2022
Yingcong Li, Mingchen Li, M. Salman Asif, Samet Oymak
Topics: CLL

Papers citing "Provable and Efficient Continual Representation Learning" (40 of 40 shown)

 1. On the Statistical Benefits of Curriculum Learning
    Ziping Xu, Ambuj Tewari · 13 Nov 2021 · 48 / 9 / 0

 2. Continual Learning in the Teacher-Student Setup: Impact of Task Similarity
    Sebastian Lee, Sebastian Goldt, Andrew M. Saxe · CLL · 09 Jul 2021 · 75 / 74 / 0

 3. Model Zoo: A Growing "Brain" That Learns Continually
    Rahul Ramesh, Pratik Chaudhari · CLL, FedML · 06 Jun 2021 · 81 / 65 / 0

 4. Representation Learning Beyond Linear Prediction Functions
    Ziping Xu, Ambuj Tewari · 31 May 2021 · 46 / 21 / 0

 5. Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks
    Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis · 16 Dec 2020 · 63 / 51 / 0

 6. Rethinking Experience Replay: a Bag of Tricks for Continual Learning
    Pietro Buzzega, Matteo Boschini, Angelo Porrello, Simone Calderara · CLL · 12 Oct 2020 · 45 / 153 / 0

 7. A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix
    T. Doan, Mehdi Abbana Bennani, Bogdan Mazoure, Guillaume Rabusseau, Pierre Alquier · CLL · 07 Oct 2020 · 70 / 86 / 0

 8. Functional Regularization for Representation Learning: A Unified Theoretical Perspective
    Siddhant Garg, Yingyu Liang · SSL · 06 Aug 2020 · 51 / 20 / 0

 9. Supermasks in Superposition
    Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, J. Yosinski, Ali Farhadi · SSL, CLL · 26 Jun 2020 · 88 / 296 / 0

10. Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient
    Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos · 14 Jun 2020 · 60 / 104 / 0

11. Understanding the Role of Training Regimes in Continual Learning
    Seyed Iman Mirzadeh, Mehrdad Farajtabar, Razvan Pascanu, H. Ghasemzadeh · CLL · 12 Jun 2020 · 76 / 228 / 0

12. Coresets via Bilevel Optimization for Continual Learning and Streaming
    Zalan Borsos, Mojmír Mutný, Andreas Krause · CLL · 06 Jun 2020 · 80 / 238 / 0

13. Continual Learning with Node-Importance based Adaptive Group Sparse Regularization
    Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon · CLL · 30 Mar 2020 · 53 / 128 / 0

14. Proving the Lottery Ticket Hypothesis: Pruning is All You Need
    Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir · 03 Feb 2020 · 109 / 276 / 0

15. What's Hidden in a Randomly Weighted Neural Network?
    Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari · 29 Nov 2019 · 66 / 361 / 0

16. Discovering Neural Wirings
    Mitchell Wortsman, Ali Farhadi, Mohammad Rastegari · AI4CE · 03 Jun 2019 · 109 / 121 / 0

17. Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
    Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski · UQCV · 03 May 2019 · 65 / 389 / 0

18. Three scenarios for continual learning
    Gido M. van de Ven, A. Tolias · CLL · 15 Apr 2019 · 99 / 895 / 0

19. Provable Guarantees for Gradient-Based Meta-Learning
    M. Khodak, Maria-Florina Balcan, Ameet Talwalkar · FedML · 27 Feb 2019 · 127 / 150 / 0

20. Experience Replay for Continual Learning
    David Rolnick, Arun Ahuja, Jonathan Richard Schwarz, Timothy Lillicrap, Greg Wayne · CLL · 28 Nov 2018 · 116 / 1,171 / 0

21. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
    Jonathan Frankle, Michael Carbin · 09 Mar 2018 · 269 / 3,488 / 0

22. Learning Efficient Convolutional Networks through Network Slimming
    Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang · 22 Aug 2017 · 127 / 2,426 / 0

23. Lifelong Learning with Dynamically Expandable Networks
    Jaehong Yoon, Eunho Yang, Jeongtae Lee, Sung Ju Hwang · CLL · 04 Aug 2017 · 125 / 1,229 / 0

24. Channel Pruning for Accelerating Very Deep Neural Networks
    Yihui He, Xiangyu Zhang, Jian Sun · 19 Jul 2017 · 206 / 2,529 / 0

25. Gradient Episodic Memory for Continual Learning
    David Lopez-Paz, Marc'Aurelio Ranzato · VLM, CLL · 26 Jun 2017 · 131 / 2,738 / 0

26. Efficient Processing of Deep Neural Networks: A Tutorial and Survey
    Vivienne Sze, Yu-hsin Chen, Tien-Ju Yang, J. Emer · AAML, 3DV · 27 Mar 2017 · 120 / 3,028 / 0

27. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    Chelsea Finn, Pieter Abbeel, Sergey Levine · OOD · 09 Mar 2017 · 831 / 11,952 / 0

28. PathNet: Evolution Channels Gradient Descent in Super Neural Networks
    Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David R Ha, Andrei A. Rusu, Alexander Pritzel, Daan Wierstra · 30 Jan 2017 · 75 / 881 / 0

29. Overcoming catastrophic forgetting in neural networks
    J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, J. Veness, Guillaume Desjardins, ..., A. Grabska-Barwinska, Demis Hassabis, Claudia Clopath, D. Kumaran, R. Hadsell · CLL · 02 Dec 2016 · 374 / 7,572 / 0

30. SGDR: Stochastic Gradient Descent with Warm Restarts
    I. Loshchilov, Frank Hutter · ODL · 13 Aug 2016 · 350 / 8,179 / 0

31. Learning Structured Sparsity in Deep Neural Networks
    W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li · 12 Aug 2016 · 187 / 2,341 / 0

32. Learning without Forgetting
    Zhizhong Li, Derek Hoiem · CLL, OOD, SSL · 29 Jun 2016 · 308 / 4,432 / 0

33. Progressive Neural Networks
    Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, J. Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, R. Hadsell · CLL, AI4CE · 15 Jun 2016 · 83 / 2,465 / 0

34. EIE: Efficient Inference Engine on Compressed Deep Neural Network
    Song Han, Xingyu Liu, Huizi Mao, Jing Pu, A. Pedram, M. Horowitz, W. Dally · 04 Feb 2016 · 129 / 2,461 / 0

35. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
    Song Han, Huizi Mao, W. Dally · 3DGS · 01 Oct 2015 · 263 / 8,862 / 0

36. Learning both Weights and Connections for Efficient Neural Networks
    Song Han, Jeff Pool, J. Tran, W. Dally · CVBM · 08 Jun 2015 · 316 / 6,709 / 0

37. Fast ConvNets Using Group-wise Brain Damage
    V. Lebedev, Victor Lempitsky · AAML · 08 Jun 2015 · 191 / 449 / 0

38. The Benefit of Multitask Representation Learning
    Andreas Maurer, Massimiliano Pontil, Bernardino Romera-Paredes · SSL · 23 May 2015 · 109 / 376 / 0

39. Generating Sequences With Recurrent Neural Networks
    Alex Graves · GAN · 04 Aug 2013 · 169 / 4,039 / 0

40. Oracle Inequalities and Optimal Inference under Group Sparsity
    Karim Lounici, Massimiliano Pontil, Alexandre B. Tsybakov, Sara van de Geer · 11 Jul 2010 · 303 / 381 / 0