Reset It and Forget It: Relearning Last-Layer Weights Improves Continual and Transfer Learning

12 October 2023
Lapo Frati
Neil Traft
Jeff Clune
Nick Cheney
CLL
arXiv: 2310.07996 · PDF · HTML
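The title refers to reinitializing ("resetting") and then relearning a network's final output layer. A minimal sketch of such a last-layer reset in PyTorch, assuming a torchvision ResNet whose classification head is the `fc` attribute (the helper name and setup are illustrative, not code from the paper):

import torch.nn as nn
from torchvision.models import resnet18

def reset_last_layer(model: nn.Module, head_attr: str = "fc") -> None:
    # Reinitialize the output head in place so its weights must be relearned.
    head = getattr(model, head_attr)
    if isinstance(head, nn.Linear):
        head.reset_parameters()  # PyTorch's default Linear initialization

model = resnet18(num_classes=10)
reset_last_layer(model)  # e.g. at a task boundary, before relearning the head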

Papers citing "Reset It and Forget It: Relearning Last-Layer Weights Improves Continual and Transfer Learning"

3 / 3 papers shown
The Primacy Bias in Deep Reinforcement Learning
Evgenii Nikishin, Max Schwarzer, P. D'Oro, Pierre-Luc Bacon, Aaron C. Courville
OnRL · 93 · 180 · 0 · 16 May 2022

Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning
Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, Xiaolong Wang
175 · 340 · 0 · 09 Mar 2020

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 332 · 11,684 · 0 · 09 Mar 2017