ResearchTrend.AI
arXiv:2311.17400 · Cited By
Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention

29 November 2023
Lujia Shen, Yuwen Pu, Shouling Ji, Changjiang Li, Xuhong Zhang, Chunpeng Ge, Ting Wang
Tags: AAML

Papers citing "Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention"

5 / 5 papers shown
1. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   VPVLM · 280 · 3,848 · 0 · 18 Apr 2021

2. Recent Advances in Adversarial Training for Adversarial Robustness
   Tao Bai, Jinqi Luo, Jun Zhao, B. Wen, Qian Wang
   AAML · 76 · 473 · 0 · 02 Feb 2021

3. On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks
   Junyoung Byun, Hyojun Go, Changick Kim
   AAML · 122 · 18 · 0 · 13 Jan 2021

4. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
   Haoyi Zhou, Shanghang Zhang, J. Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wan Zhang
   AI4TS · 169 · 3,885 · 0 · 14 Dec 2020

5. FreeLB: Enhanced Adversarial Training for Natural Language Understanding
   Chen Zhu, Yu Cheng, Zhe Gan, S. Sun, Tom Goldstein, Jingjing Liu
   AAML · 226 · 438 · 0 · 25 Sep 2019