SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models
Versions: v1, v2, v3 (latest)

Papers citing "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models"

37 / 37 papers shown
Title: Layer Normalization · 435 · 10,541 · 0 · 21 Jul 2016
