Stronger and Faster Wasserstein Adversarial Attacks

Kaiwen Wu, Allen Wang, Yaoliang Yu (6 August 2020) [AAML]
ArXiv | PDF | HTML

Papers citing "Stronger and Faster Wasserstein Adversarial Attacks" (12 papers)

  • Effective and Robust Adversarial Training against Data and Label Corruptions
    Pengfei Zhang, Zi Huang, Xin-Shun Xu, Guangdong Bai (07 May 2024)
  • Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM
    Linyu Tang, Lei Zhang (18 Mar 2024) [AAML]
  • Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance
    Ngoc N. Tran, Anh Tuan Bui, Dinh Q. Phung, Trung Le (05 Dec 2022) [AAML]
  • Improving Adversarial Robustness via Mutual Information Estimation
    Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu (25 Jul 2022) [AAML]
  • SLOSH: Set LOcality Sensitive Hashing via Sliced-Wasserstein Embeddings
    Yuzhe Lu, Xinran Liu, Andrea Soltoggio, Soheil Kolouri (11 Dec 2021)
  • Modeling Adversarial Noise for Adversarial Training
    Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu (21 Sep 2021) [AAML]
  • Training Meta-Surrogate Model for Transferable Adversarial Attack
    Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Cho-Jui Hsieh (05 Sep 2021) [AAML]
  • Trustworthy AI: A Computational Perspective
    Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang (12 Jul 2021) [FaML]
  • Removing Adversarial Noise in Class Activation Feature Space
    Dawei Zhou, N. Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu (19 Apr 2021) [AAML]
  • Achieving Adversarial Robustness Requires An Active Teacher
    Chao Ma, Lexing Ying (14 Dec 2020)
  • Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
    Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard (19 Oct 2020) [AAML]
  • Adversarial Machine Learning at Scale
    Alexey Kurakin, Ian Goodfellow, Samy Bengio (04 Nov 2016) [AAML]