A Study on Efficiency in Continual Learning Inspired by Human Learning
Philip J. Ball, Yingzhen Li, A. Lamb, Cheng Zhang
28 October 2020 · arXiv:2010.15187 · Topic: CLL

Papers citing "A Study on Efficiency in Continual Learning Inspired by Human Learning" (6 papers shown)

Title | Authors | Topics | Citations | Date
Comparing Rewinding and Fine-tuning in Neural Network Pruning | Alex Renda, Jonathan Frankle, Michael Carbin | - | 387 | 05 Mar 2020
Compacting, Picking and Growing for Unforgetting Continual Learning | Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, Chu-Song Chen | CLL | 308 | 15 Oct 2019
PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning | Arun Mallya, Svetlana Lazebnik | CLL | 1,296 | 15 Nov 2017
Progressive Neural Networks | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, J. Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, R. Hadsell | CLL, AI4CE | 2,438 | 15 Jun 2016
Learning both Weights and Connections for Efficient Neural Networks | Song Han, Jeff Pool, J. Tran, W. Dally | CVBM | 6,660 | 08 Jun 2015
Very Deep Convolutional Networks for Large-Scale Image Recognition | Karen Simonyan, Andrew Zisserman | FAtt, MDE | 100,213 | 04 Sep 2014