How many perturbations break this model? Evaluating robustness beyond adversarial accuracy
R. Olivier, Bhiksha Raj
8 July 2022 · arXiv: 2207.04129 · AAML
Papers citing "How many perturbations break this model? Evaluating robustness beyond adversarial accuracy" (6 of 6 papers shown):
Title | Authors | Tags | Metrics | Date
----- | ------- | ---- | ------- | ----
The Vulnerability of Language Model Benchmarks: Do They Accurately Reflect True LLM Performance? | Sourav Banerjee, Ayushi Agarwal, Eishkaran Singh | ELM | 73 / 2 / 0 | 02 Dec 2024
An Analytic Solution to Covariance Propagation in Neural Networks | Oren Wright, Yorie Nakahira, José M. F. Moura | | 21 / 5 / 0 | 24 Mar 2024
Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume | Ping Guo, Cheng Gong, Xi Lin, Zhiyuan Yang, Qingfu Zhang | AAML | 31 / 2 / 0 | 08 Mar 2024
GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models | Zaitang Li, Pin-Yu Chen, Tsung-Yi Ho | AAML, DiffM | 32 / 4 / 0 | 19 Apr 2023
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks | Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma | AAML, TPM | 46 / 100 / 0 | 07 Oct 2021
RobustBench: a standardized adversarial robustness benchmark | Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein | VLM | 234 / 678 / 0 | 19 Oct 2020