WARP: Word-level Adversarial ReProgramming
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
1 January 2021
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May

Papers citing "WARP: Word-level Adversarial ReProgramming"

50 of 209 citing papers shown:
Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Mozhdeh Gheini, Xuezhe Ma, Jonathan May
25 May 2022 · Citations: 8

Structured Prompt Tuning
Chi-Liang Liu, Hung-yi Lee, Anuj Kumar
24 May 2022 · Citations: 3

Dynamic Prefix-Tuning for Generative Template-based Event Extraction
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Xiao Liu, Heyan Huang, Ge Shi, Bo Wang
12 May 2022 · Citations: 111

Clinical Prompt Learning with Frozen Language Models
Niall Taylor, Yi Zhang, Dan W Joyce, A. Nevado-Holgado, Andrey Kormilitzin
11 May 2022 · Citations: 37

ProQA: Structural Prompt-based Pre-training for Unified Question Answering
North American Chapter of the Association for Computational Linguistics (NAACL), 2022
Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan
09 May 2022 · Citations: 37
Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2022
Xiang Chen, Lei Li, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
04 May 2022 · Citations: 42

Mixed-effects transformers for hierarchical adaptation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Julia White, Noah D. Goodman, Robert D. Hawkins
03 May 2022 · Citations: 3

HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zihan Wang, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui, Houfeng Wang
28 Apr 2022 · Citations: 61

SmartSales: Sales Script Extraction and Analysis from Sales Chatlog
Hua Liang, Tianyu Liu, Peiyi Wang, Meng-Liang Rao, Yunbo Cao
19 Apr 2022 · Citations: 2

Zero-shot Entity and Tweet Characterization with Designed Conditional Prompts and Contexts
S. Srivatsa, Tushar Mohan, Kumari Neha, Nishchay Malakar, Ponnurangam Kumaraguru, Srinath Srinivasa
18 Apr 2022 · Citations: 0
Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection
International Conference on Computational Linguistics (COLING), 2022
Minqian Liu, Shiyu Chang, Lifu Huang
15 Apr 2022 · Citations: 31

Contrastive Demonstration Tuning for Pre-trained Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Xiaozhuan Liang, Ningyu Zhang, Shuyang Cheng, Zhenru Zhang, Chuanqi Tan, Huajun Chen
09 Apr 2022 · Citations: 10

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning
Web Search and Data Mining (WSDM), 2022
Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Yanjie Liang
01 Apr 2022 · Citations: 40

Exploring Visual Prompts for Adapting Large-Scale Models
Hyojin Bahng, Ali Jahanian, S. Sankaranarayanan, Phillip Isola
31 Mar 2022 · Citations: 328

Towards Few-shot Entity Recognition in Document Images: A Label-aware Sequence-to-Sequence Framework
Findings, 2022
Zilong Wang, Jingbo Shang
30 Mar 2022 · Citations: 13
Few-Shot Learning with Siamese Networks and Label Tuning
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Thomas Müller, Guillermo Pérez-Torró, Marc Franco-Salvador
28 Mar 2022 · Citations: 44

On Robust Prefix-Tuning for Text Classification
International Conference on Learning Representations (ICLR), 2022
Zonghan Yang, Yang Liu
19 Mar 2022 · Citations: 21

Prototypical Verbalizer for Prompt-based Few-shot Tuning
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, Zhiyuan Liu
18 Mar 2022 · Citations: 110

Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Xiwen Liang, Fengda Zhu, Lingling Li, Hang Xu, Xiaodan Liang
08 Mar 2022 · Citations: 32

Pre-trained Token-replaced Detection Model as Few-shot Learner
International Conference on Computational Linguistics (COLING), 2022
Zicheng Li, Shoushan Li, Guodong Zhou
07 Mar 2022 · Citations: 10
HyperPrompt: Prompt-based Task-Conditioning of Transformers
International Conference on Machine Learning (ICML), 2022
Yun He, H. Zheng, Yi Tay, Jai Gupta, Yu Du, ..., Yaguang Li, Zhaoji Chen, Donald Metzler, Heng-Tze Cheng, Ed H. Chi
01 Mar 2022 · Citations: 105

Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Lianzhe Huang, Shuming Ma, Dongdong Zhang, Furu Wei, Houfeng Wang
23 Feb 2022 · Citations: 36

Prompt-Learning for Short Text Classification
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2022
Yi Zhu, Xinke Zhou, Jipeng Qiang, Yun Li, Yunhao Yuan, Xindong Wu
23 Feb 2022 · Citations: 48

Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning
AAAI Conference on Artificial Intelligence (AAAI), 2022
Pin-Yu Chen
22 Feb 2022 · Citations: 74

$\mathcal{Y}$-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning
Yitao Liu, Chen An, Xipeng Qiu
20 Feb 2022 · Citations: 19
Can Machines Help Us Answering Question 16 in Datasheets, and In Turn Reflecting on Inappropriate Content?
Conference on Fairness, Accountability and Transparency (FAccT), 2022
P. Schramowski, Christopher Tauchmann, Kristian Kersting
14 Feb 2022 · Citations: 135

Prompt-Guided Injection of Conformation to Pre-trained Protein Model
Qiang Zhang, Zeyuan Wang, Yuqiang Han, Haoran Yu, Xurui Jin, Huajun Chen
07 Feb 2022 · Citations: 3

Black-box Prompt Learning for Pre-trained Language Models
Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang
21 Jan 2022 · Citations: 82

Instance-aware Prompt Learning for Language Understanding and Generation
Feihu Jin, Jinliang Lu, Jiajun Zhang, Chengqing Zong
18 Jan 2022 · Citations: 36

Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer
International Conference on Artificial Neural Networks (ICANN), 2022
Yinyi Wei, Tong Mo, Yong Jiang, Weiping Li, Wen Zhao
14 Jan 2022 · Citations: 18
Black-Box Tuning for Language-Model-as-a-Service
International Conference on Machine Learning (ICML), 2022
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu
10 Jan 2022 · Citations: 311

Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation
Tianyi Liu, Zuxuan Wu, Wenhan Xiong, Jingjing Chen, Yu-Gang Jiang
10 Dec 2021 · Citations: 10

True Few-Shot Learning with Prompts -- A Real-World Perspective
Transactions of the Association for Computational Linguistics (TACL), 2021
Timo Schick, Hinrich Schütze
26 Nov 2021 · Citations: 73

On Transferability of Prompt Tuning for Natural Language Processing
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, ..., Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou
12 Nov 2021 · Citations: 113

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
ACM Computing Surveys (CSUR), 2021
Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth
01 Nov 2021 · Citations: 1,296
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer
15 Oct 2021 · Citations: 308

Exploring Universal Intrinsic Task Subspace via Prompt Tuning
Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, ..., Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou
15 Oct 2021 · Citations: 29

Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models
Xin Zhou, Ruotian Ma, Tao Gui, Y. Tan, Xuanjing Huang
14 Oct 2021 · Citations: 5

Paradigm Shift in Natural Language Processing
Machine Intelligence Research (MIR), 2021
Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang
26 Sep 2021 · Citations: 86

What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, ..., Jaewook Kang, Inho Kang, Jung-Woo Ha, W. Park, Nako Sung
10 Sep 2021 · Citations: 125
PPT: Pre-trained Prompt Tuning for Few-shot Learning
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang
09 Sep 2021 · Citations: 457

Continuous Entailment Patterns for Lexical Inference in Context
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Martin Schmitt, Hinrich Schütze
08 Sep 2021 · Citations: 3

Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
International Conference on Learning Representations (ICLR), 2021
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen
30 Aug 2021 · Citations: 198

Accurate, yet inconsistent? Consistency Analysis on Language Understanding Models
Myeongjun Jang, D. Kwon, Thomas Lukasiewicz
15 Aug 2021 · Citations: 14

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juan-Zi Li, Wei Wu, Maosong Sun
04 Aug 2021 · Citations: 406
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
ACM Computing Surveys (CSUR), 2021
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
28 Jul 2021 · Citations: 4,620

CPM-2: Large-scale Cost-effective Pre-trained Language Models
AI Open (AO), 2021
Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, ..., Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun
20 Jun 2021 · Citations: 92

LoRA: Low-Rank Adaptation of Large Language Models
International Conference on Learning Representations (ICLR), 2021
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
17 Jun 2021 · Citations: 14,068

Voice2Series: Reprogramming Acoustic Models for Time Series Classification
Chao-Han Huck Yang, Yun-Yun Tsai, Pin-Yu Chen
17 Jun 2021 · Citations: 139

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei, Sang Michael Xie, Tengyu Ma
17 Jun 2021 · Citations: 112