ResearchTrend.AI
Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training

Shunsuke Kitada, Hitoshi Iyatomi
25 September 2020 · arXiv: 2009.12064
Topics: OOD, AAML

Papers citing "Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training" (9 papers)
  1. SATO: Stable Text-to-Motion Framework
     Wenshuo Chen, Hongru Xiao, Erhang Zhang, Lijie Hu, Lei Wang, Mengyuan Liu, Chong Chen (02 May 2024)
  2. Improving Prediction Performance and Model Interpretability through Attention Mechanisms from Basic and Applied Research Perspectives
     Shunsuke Kitada (24 Mar 2023) · Topics: FaML, HAI, AI4CE
  3. COmic: Convolutional Kernel Networks for Interpretable End-to-End Learning on (Multi-)Omics Data
     Jonas C. Ditz, Bernhard Reuter, Nícolas Pfeifer (02 Dec 2022)
  4. SEAT: Stable and Explainable Attention
     Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang (23 Nov 2022) · Topics: OOD
  5. Improving Health Mentioning Classification of Tweets using Contrastive Adversarial Training
     Pervaiz Iqbal Khan, Shoaib Ahmed Siddiqui, Imran Razzak, Andreas Dengel, Sheraz Ahmed (03 Mar 2022)
  6. Improved Text Classification via Contrastive Adversarial Training
     Lin Pan, Chung-Wei Hang, Avirup Sil, Saloni Potdar (21 Jul 2021) · Topics: AAML
  7. Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training
     Shunsuke Kitada, Hitoshi Iyatomi (18 Apr 2021) · Topics: AAML
  8. A Decomposable Attention Model for Natural Language Inference
     Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit (06 Jun 2016)
  9. Effective Approaches to Attention-based Neural Machine Translation
     Thang Luong, Hieu H. Pham, Christopher D. Manning (17 Aug 2015)