Hyperdecoders: Instance-specific decoders for multi-task NLP
15 March 2022
Hamish Ivison, Matthew E. Peters
AI4CE

Papers citing "Hyperdecoders: Instance-specific decoders for multi-task NLP"

20 / 20 papers shown
InterACT: Inter-dependency Aware Action Chunking with Hierarchical Attention Transformers for Bimanual Manipulation
Andrew Lee, Ian Chuang, Ling-Yuan Chen, Iman Soltani
12 Sep 2024

HyperLoader: Integrating Hypernetwork-Based LoRA and Adapter Layers into Multi-Task Transformers for Sequence Labelling
Jesús-Germán Ortiz-Barajas, Helena Gómez-Adorno, Thamar Solorio
01 Jul 2024

Efficient Prompt Tuning by Multi-Space Projection and Prompt Fusion
Pengxiang Lan, Enneng Yang, Yuting Liu, Guibing Guo, Linying Jiang, Jianzhe Zhao, Xingwei Wang
VLM, AAML
19 May 2024

HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts
Hao Zhao, Zihan Qiu, Huijia Wu, Zili Wang, Zhaofeng He, Jie Fu
MoE
20 Feb 2024

Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction
Yonggang Jin, Ge Zhang, Hao Zhao, Tianyu Zheng, Jiawei Guo, Liuyu Xiang, Shawn Yue, Stephen W. Huang, Zhaofeng He, Jie Fu
OffRL
06 Feb 2024

Dynamics Generalisation in Reinforcement Learning via Adaptive Context-Aware Policies
Michael Beukman, Devon Jarvis, Richard Klein, Steven D. James, Benjamin Rosman
25 Oct 2023

Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning
Hao Zhao, Jie Fu, Zhaofeng He
18 Oct 2023

DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
Zhengxiang Shi, Aldo Lipani
VLM
11 Sep 2023

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
Zhen Wang, Yikang Shen, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim
VLM, VPVLM
06 Mar 2023

HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation
Hamish Ivison, Akshita Bhagia, Yizhong Wang, Hannaneh Hajishirzi, Matthew E. Peters
20 Dec 2022

Boosting Natural Language Generation from Instructions with Meta-Learning
Budhaditya Deb, Guoqing Zheng, Ahmed Hassan Awadallah
20 Oct 2022

ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
24 May 2022

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
LRM
15 Oct 2021

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
VLM
14 Oct 2021

Single-dataset Experts for Multi-dataset Question Answering
Dan Friedman, Ben Dodge, Danqi Chen
MoMe
28 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, ..., Nishant Subramani, Wei-ping Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou
VLM
02 Feb 2021

Learning to Generate Task-Specific Adapters from Task Description
Qinyuan Ye, Xiang Ren
02 Jan 2021

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018

Teaching Machines to Read and Comprehend
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, L. Espeholt, W. Kay, Mustafa Suleyman, Phil Blunsom
10 Jun 2015