ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Few-Shot Learning by Dimensionality Reduction in Gradient Space
arXiv:2206.03483 · 7 June 2022
M. Gauch
M. Beck
Thomas Adler
D. Kotsur
Stefan Fiel
Hamid Eghbalzadeh
Johannes Brandstetter
Johannes Kofler
Markus Holzleitner
Werner Zellinger
D. Klotz
Sepp Hochreiter
Sebastian Lehner

Papers citing "Few-Shot Learning by Dimensionality Reduction in Gradient Space"

10 of 10 citing papers shown.

  1. One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation
     Fabian Paischer, Lukas Hauzenberger, Thomas Schmied, Benedikt Alkin, Marc Peter Deisenroth, Sepp Hochreiter
     09 Oct 2024

  2. PMSS: Pretrained Matrices Skeleton Selection for LLM Fine-tuning
     Qibin Wang, Xiaolin Hu, Weikai Xu, Wei Liu, Jian Luan, Bin Wang
     25 Sep 2024

  3. Does SGD really happen in tiny subspaces?
     Minhak Song, Kwangjun Ahn, Chulhee Yun
     25 May 2024

  4. PARMESAN: Parameter-Free Memory Search and Transduction for Dense Prediction Tasks
     Philip Matthias Winter, M. Wimmer, David Major, Dimitrios Lenis, Astrid Berg, Theresa Neubauer, Gaia Romana De Paolis, Johannes Novotny, Sophia Ulonska, Katja Bühler
     18 Mar 2024

  5. SymbolicAI: A framework for logic-based approaches combining generative models and solvers
     Marius-Constantin Dinu, Claudiu Leoveanu-Condrei, Markus Holzleitner, Werner Zellinger, Sepp Hochreiter
     01 Feb 2024

  6. Identifying Policy Gradient Subspaces
     Jan Schneider-Barnes, Pierre Schumacher, Simon Guist, Tianyu Cui, Daniel Haeufle, Bernhard Schölkopf, Le Chen
     12 Jan 2024

  7. Fine-tuning Happens in Tiny Subspaces: Exploring Intrinsic Task-specific Subspaces of Pre-trained Language Models
     Zhong Zhang, Bang Liu, Junming Shao
     27 May 2023

  8. Few-shot Adaptation for Manipulating Granular Materials Under Domain Shift
     Yifan Zhu, Pranay Thangeda, Melkior Ornik, Kris K. Hauser
     06 Mar 2023

  9. Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
     Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals
     19 Sep 2019

  10. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
      Chelsea Finn, Pieter Abbeel, Sergey Levine
      09 Mar 2017