p-Meta: Towards On-device Deep Model Adaptation

Zhongnan Qu, Zimu Zhou, Yongxin Tong, Lothar Thiele
25 June 2022 · arXiv:2206.12705

Papers citing "p-Meta: Towards On-device Deep Model Adaptation" (17 papers)

Learning where to learn: Gradient sparsity in meta and continual learning
J. Oswald, Dominic Zhao, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, João Sacramento
27 Oct 2021

On-device Federated Learning with Flower
Akhil Mathur, Daniel J. Beutel, Pedro Porto Buarque de Gusmão, Javier Fernandez-Marques, Taner Topal, Xinchi Qiu, Titouan Parcollet, Yan Gao, Nicholas D. Lane
07 Apr 2021

Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning
Zhiqiang Shen, Zechun Liu, Jie Qin, Marios Savvides, Kwang-Ting Cheng
08 Feb 2021

Meta-Learning in Neural Networks: A Survey
Timothy M. Hospedales, Antreas Antoniou, P. Micaelli, Amos Storkey
11 Apr 2020

Low-rank Gradient Approximation For Memory-Efficient On-device Training of Deep Neural Network
Mary Gooneratne, K. Sim, P. Zadrazil, Andreas Kabel, F. Beaufays, Giovanni Motta
24 Jan 2020

Dynamic Convolution: Attention over Convolution Kernels
Yinpeng Chen, Xiyang Dai, Mengchen Liu, Dongdong Chen, Lu Yuan, Zicheng Liu
07 Dec 2019

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals
19 Sep 2019

Torchmeta: A Meta-Learning library for PyTorch
T. Deleu, Tobias Würfl, Mandana Samiei, Joseph Paul Cohen, Yoshua Bengio
14 Sep 2019

Group Normalization
Yuxin Wu, Kaiming He
22 Mar 2018

Meta-Learning for Semi-Supervised Few-Shot Classification
Mengye Ren, Eleni Triantafillou, S. S. Ravi, Jake C. Snell, Kevin Swersky, J. Tenenbaum, Hugo Larochelle, R. Zemel
02 Mar 2018

Learning to Compare: Relation Network for Few-Shot Learning
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip Torr, Timothy M. Hospedales
16 Nov 2017

The Reversible Residual Network: Backpropagation Without Storing Activations
Aidan Gomez, Mengye Ren, R. Urtasun, Roger C. Grosse
14 Jul 2017

Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
Priya Goyal, Piotr Dollár, Ross B. Girshick, P. Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He
08 Jun 2017

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
09 Mar 2017

Matching Networks for One Shot Learning
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra
13 Jun 2016

Memory-Efficient Backpropagation Through Time
A. Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, Alex Graves
10 Jun 2016

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally
01 Oct 2015