Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention
Lujia Shen, Yuwen Pu, Shouling Ji, Changjiang Li, Xuhong Zhang, Chunpeng Ge, Ting Wang
arXiv:2311.17400 · 29 November 2023 · AAML
Papers citing "Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention" (5 of 5 papers shown):
1. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant · VPVLM · 3,848 citations · 18 Apr 2021

2. Recent Advances in Adversarial Training for Adversarial Robustness
   Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang · AAML · 473 citations · 02 Feb 2021

3. On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks
   Junyoung Byun, Hyojun Go, Changick Kim · AAML · 18 citations · 13 Jan 2021

4. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
   Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang · AI4TS · 3,885 citations · 14 Dec 2020

5. FreeLB: Enhanced Adversarial Training for Natural Language Understanding
   Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu · AAML · 438 citations · 25 Sep 2019