ResearchTrend.AI
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
arXiv:2101.00121 (v2, latest). 1 January 2021. [AAML]

Papers citing "WARP: Word-level Adversarial ReProgramming"

50 / 208 papers shown

Structured Prompt Tuning
  Chi-Liang Liu, Hung-yi Lee, Wen-tau Yih. 24 May 2022. 62 / 3 / 0
Dynamic Prefix-Tuning for Generative Template-based Event Extraction
  Xiao Liu, Heyan Huang, Ge Shi, Bo Wang. 12 May 2022. 74 / 102 / 0
Clinical Prompt Learning with Frozen Language Models
  Niall Taylor, Yi Zhang, Dan W Joyce, A. Nevado-Holgado, Andrey Kormilitzin. [VLMLM&MA] 11 May 2022. 63 / 34 / 0
ProQA: Structural Prompt-based Pre-training for Unified Question Answering
  Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan. 09 May 2022. 85 / 34 / 0
Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning
  Xiang Chen, Lei Li, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen. [RALMVLM] 04 May 2022. 83 / 40 / 0
Mixed-effects transformers for hierarchical adaptation
  Julia White, Noah D. Goodman, Robert D. Hawkins. 03 May 2022. 65 / 2 / 0
HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification
  Zihan Wang, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui, Houfeng Wang. [VLM] 28 Apr 2022. 89 / 49 / 0
SmartSales: Sales Script Extraction and Analysis from Sales Chatlog
  Hua Liang, Tianyu Liu, Peiyi Wang, Meng-Liang Rao, Yunbo Cao. 19 Apr 2022. 64 / 2 / 0
Zero-shot Entity and Tweet Characterization with Designed Conditional Prompts and Contexts
  S. Srivatsa, Tushar Mohan, Kumari Neha, Nishchay Malakar, Ponnurangam Kumaraguru, Srinath Srinivasa. 18 Apr 2022. 69 / 0 / 0
Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection
  Minqian Liu, Shiyu Chang, Lifu Huang. [KELMCLL] 15 Apr 2022. 112 / 29 / 0
Contrastive Demonstration Tuning for Pre-trained Language Models
  Xiaozhuan Liang, Ningyu Zhang, Shuyang Cheng, Zhenru Zhang, Chuanqi Tan, Huajun Chen. [VLMALMVPVLM] 09 Apr 2022. 114 / 9 / 0
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning
  Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang. [VLM] 01 Apr 2022. 105 / 34 / 0
Exploring Visual Prompts for Adapting Large-Scale Models
  Hyojin Bahng, Ali Jahanian, S. Sankaranarayanan, Phillip Isola. [VLMVPVLMLRM] 31 Mar 2022. 116 / 274 / 0
Towards Few-shot Entity Recognition in Document Images: A Label-aware Sequence-to-Sequence Framework
  Zilong Wang, Jingbo Shang. 30 Mar 2022. 62 / 13 / 0
Few-Shot Learning with Siamese Networks and Label Tuning
  Thomas Müller, Guillermo Pérez-Torró, Marc Franco-Salvador. [VLM] 28 Mar 2022. 102 / 41 / 0
On Robust Prefix-Tuning for Text Classification
  Zonghan Yang, Yang Liu. [VLM] 19 Mar 2022. 87 / 21 / 0
Prototypical Verbalizer for Prompt-based Few-shot Tuning
  Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, Zhiyuan Liu. [VLM] 18 Mar 2022. 77 / 99 / 0
Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration
  Xiwen Liang, Fengda Zhu, Lingling Li, Hang Xu, Xiaodan Liang. [LM&RoVLM] 08 Mar 2022. 63 / 30 / 0
Pre-trained Token-replaced Detection Model as Few-shot Learner
  Zicheng Li, Shoushan Li, Guodong Zhou. 07 Mar 2022. 77 / 9 / 0
HyperPrompt: Prompt-based Task-Conditioning of Transformers
  Yun He, H. Zheng, Yi Tay, Jai Gupta, Yu Du, ..., Yaguang Li, Zhaoji Chen, Donald Metzler, Heng-Tze Cheng, Ed H. Chi. [LRMVLM] 01 Mar 2022. 132 / 93 / 0
Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt
  Lianzhe Huang, Shuming Ma, Dongdong Zhang, Furu Wei, Houfeng Wang. [VLMLRM] 23 Feb 2022. 98 / 32 / 0
Prompt-Learning for Short Text Classification
  Yi Zhu, Xinke Zhou, Jipeng Qiang, Yun Li, Yunhao Yuan, Xindong Wu. [VLM] 23 Feb 2022. 86 / 36 / 0
Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning
  Pin-Yu Chen. [VLM] 22 Feb 2022. 204 / 64 / 0
$\mathcal{Y}$-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning
  Yitao Liu, Chen An, Xipeng Qiu. 20 Feb 2022. 93 / 18 / 0
Can Machines Help Us Answering Question 16 in Datasheets, and In Turn Reflecting on Inappropriate Content?
  P. Schramowski, Christopher Tauchmann, Kristian Kersting. [FaML] 14 Feb 2022. 114 / 100 / 0
Prompt-Guided Injection of Conformation to Pre-trained Protein Model
  Qiang Zhang, Zeyuan Wang, Yuqiang Han, Haoran Yu, Xurui Jin, Huajun Chen. 07 Feb 2022. 72 / 3 / 0
Black-box Prompt Learning for Pre-trained Language Models
  Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang. [VLMAAML] 21 Jan 2022. 108 / 71 / 0
Instance-aware Prompt Learning for Language Understanding and Generation
  Feihu Jin, Jinliang Lu, Jiajun Zhang, Chengqing Zong. 18 Jan 2022. 59 / 33 / 0
Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer
  Yinyi Wei, Tong Mo, Yong Jiang, Weiping Li, Wen Zhao. [VLM] 14 Jan 2022. 139 / 15 / 0
Black-Box Tuning for Language-Model-as-a-Service
  Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu. [VLM] 10 Jan 2022. 222 / 278 / 0
Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation
  Tianyi Liu, Zuxuan Wu, Wenhan Xiong, Jingjing Chen, Yu-Gang Jiang. [VLMMLLM] 10 Dec 2021. 95 / 10 / 0
True Few-Shot Learning with Prompts -- A Real-World Perspective
  Timo Schick, Hinrich Schütze. [VLM] 26 Nov 2021. 124 / 65 / 0
On Transferability of Prompt Tuning for Natural Language Processing
  Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, ..., Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou. [AAMLVLM] 12 Nov 2021. 90 / 106 / 0
Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
  Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth. [LM&MAVLMAI4CE] 01 Nov 2021. 206 / 1,104 / 0
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
  Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer. [VLMLRM] 15 Oct 2021. 230 / 290 / 0
Exploring Universal Intrinsic Task Subspace via Prompt Tuning
  Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, ..., Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou. [VLMVPVLM] 15 Oct 2021. 200 / 29 / 0
Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models
  Xin Zhou, Ruotian Ma, Tao Gui, Y. Tan, Qi Zhang, Xuanjing Huang. [VLM] 14 Oct 2021. 80 / 5 / 0
Paradigm Shift in Natural Language Processing
  Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang. 26 Sep 2021. 227 / 84 / 0
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
  Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, ..., Jaewook Kang, Inho Kang, Jung-Woo Ha, W. Park, Nako Sung. [VLM] 10 Sep 2021. 310 / 124 / 0
PPT: Pre-trained Prompt Tuning for Few-shot Learning
  Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang. [VLM] 09 Sep 2021. 203 / 421 / 0
Continuous Entailment Patterns for Lexical Inference in Context
  Martin Schmitt, Hinrich Schütze. 08 Sep 2021. 78 / 3 / 0
Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
  Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen. [VLM] 30 Aug 2021. 162 / 180 / 0
Accurate, yet inconsistent? Consistency Analysis on Language Understanding Models
  Myeongjun Jang, D. Kwon, Thomas Lukasiewicz. 15 Aug 2021. 84 / 13 / 0
Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
  Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juan-Zi Li, Wei Wu, Maosong Sun. [VLM] 04 Aug 2021. 110 / 373 / 0
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
  Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig. [VLMSyDa] 28 Jul 2021. 487 / 4,055 / 0
CPM-2: Large-scale Cost-effective Pre-trained Language Models
  Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, ..., Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun. [MoE] 20 Jun 2021. 122 / 88 / 0
LoRA: Low-Rank Adaptation of Large Language Models
  J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen. [OffRLAI4TSAI4CEALMAIMat] 17 Jun 2021. 942 / 10,705 / 0
Voice2Series: Reprogramming Acoustic Models for Time Series Classification
  Chao-Han Huck Yang, Yun-Yun Tsai, Pin-Yu Chen. [AI4TS] 17 Jun 2021. 125 / 130 / 0
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
  Colin Wei, Sang Michael Xie, Tengyu Ma. 17 Jun 2021. 169 / 100 / 0
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
  Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder. [MoE] 08 Jun 2021. 169 / 498 / 0