Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
arXiv:2002.04599 · 11 February 2020
Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, J. Jacobsen
AAML, SILM

Papers citing "Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations"

30 / 30 papers shown

OR-Bench: An Over-Refusal Benchmark for Large Language Models
  Justin Cui, Wei-Lin Chiang, Ion Stoica, Cho-Jui Hsieh
  ALM · 31 May 2024

Training Image Derivatives: Increased Accuracy and Universal Robustness
  V. Avrutskiy
  21 Oct 2023

Assessing Robustness via Score-Based Adversarial Image Generation
  Marcel Kollovieh, Lukas Gosch, Yan Scholten, Marten Lienen, Leo Schwinn, Stephan Günnemann
  DiffM · 06 Oct 2023

Adversarial Illusions in Multi-Modal Embeddings
  Tingwei Zhang, Rishi Jha, Eugene Bagdasaryan, Vitaly Shmatikov
  AAML · 22 Aug 2023

Robustified ANNs Reveal Wormholes Between Human Category Percepts
  Guy Gaziv, Michael J. Lee, J. DiCarlo
  AAML · 14 Aug 2023

Revisiting Robustness in Graph Machine Learning
  Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann
  AAML, OOD · 01 May 2023

Certified Invertibility in Neural Networks via Mixed-Integer Programming
  Tianqi Cui, Tom S. Bertalan, George J. Pappas, M. Morari, Ioannis G. Kevrekidis, Mahyar Fazlyab
  AAML · 27 Jan 2023

Invariance-Aware Randomized Smoothing Certificates
  Jan Schuchardt, Stephan Günnemann
  AAML · 25 Nov 2022

Towards Good Practices in Evaluating Transfer Adversarial Attacks
  Zhengyu Zhao, Hanwei Zhang, Renjue Li, R. Sicre, Laurent Amsaleg, Michael Backes
  AAML · 17 Nov 2022

Scaling Adversarial Training to Large Perturbation Bounds
  Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu
  AAML · 18 Oct 2022

On the Limitations of Stochastic Pre-processing Defenses
  Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot
  AAML, SILM · 19 Jun 2022

Adversarially trained neural representations may already be as robust as corresponding biological neural representations
  Chong Guo, Michael J. Lee, Guillaume Leclerc, Joel Dapello, Yug Rao, Aleksander Madry, J. DiCarlo
  GAN, AAML · 19 Jun 2022

Exact Feature Collisions in Neural Networks
  Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, W. D. Neve, Arnout Van Messem
  AAML · 31 May 2022

When adversarial examples are excusable
  Pieter-Jan Kindermans, Charles Staats
  AAML · 25 Apr 2022

A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies
  Zhuang Qian, Kaizhu Huang, Qiufeng Wang, Xu-Yao Zhang
  OOD, AAML, ObjD · 26 Mar 2022

Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
  Tianyu Pang, Min Lin, Xiao Yang, Junyi Zhu, Shuicheng Yan
  21 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
  Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
  AAML · 31 Jan 2022

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks
  Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif
  AAML · 28 Dec 2021

Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness
  Ke Sun, Mingjie Li, Zhouchen Lin
  AAML · 03 Nov 2021

Improving Robustness using Generated Data
  Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, D. A. Calian, Timothy A. Mann
  18 Oct 2021

Bugs in our Pockets: The Risks of Client-Side Scanning
  H. Abelson, Ross J. Anderson, S. Bellovin, Josh Benaloh, M. Blaze, ..., Ronald L. Rivest, J. Schiller, B. Schneier, Vanessa J. Teague, Carmela Troncoso
  14 Oct 2021

Calibrated Adversarial Training
  Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy
  AAML · 01 Oct 2021

Adversarial Visual Robustness by Causal Intervention
  Kaihua Tang, Ming Tao, Hanwang Zhang
  CML, AAML · 17 Jun 2021

Partial success in closing the gap between human and machine vision
  Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix Wichmann, Wieland Brendel
  VLM, AAML · 14 Jun 2021

Exposing Previously Undetectable Faults in Deep Neural Networks
  Isaac Dunn, Hadrien Pouget, Daniel Kroening, T. Melham
  AAML · 01 Jun 2021

Grid Cell Path Integration For Movement-Based Visual Object Recognition
  Niels Leadholm, Marcus Lewis, Subutai Ahmad
  17 Feb 2021

Fooling the primate brain with minimal, targeted image manipulation
  Li-xin Yuan, Will Xiao, Giorgia Dellaferrera, Gabriel Kreiman, Francis E. H. Tay, Jiashi Feng, Margaret Livingstone
  AAML · 11 Nov 2020

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
  Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
  AAML · 19 Oct 2020

Provable tradeoffs in adversarially robust classification
  Yan Sun, Hamed Hassani, David Hong, Alexander Robey
  09 Jun 2020

Blind Backdoors in Deep Learning Models
  Eugene Bagdasaryan, Vitaly Shmatikov
  AAML, FedML, SILM · 08 May 2020