ResearchTrend.AI
Testing robustness of predictions of trained classifiers against naturally occurring perturbations

21 April 2022
S. Scher, A. Trugler
OOD, AAML

Papers citing "Testing robustness of predictions of trained classifiers against naturally occurring perturbations"

33 papers
Generalized Out-of-Distribution Detection: A Survey
  Jingkang Yang, Kaiyang Zhou, Yixuan Li, Ziwei Liu
  21 Oct 2021

CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
  Martin Pawelczyk, Sascha Bielawski, J. V. D. Heuvel, Tobias Richter, Gjergji Kasneci
  CML · 02 Aug 2021

An Information-theoretic Approach to Distribution Shifts
  Marco Federici, Ryota Tomioka, Patrick Forré
  OOD · 07 Jun 2021

WILDS: A Benchmark of in-the-Wild Distribution Shifts
  Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, ..., A. Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang
  OOD · 14 Dec 2020

Captum: A unified and generic model interpretability library for PyTorch
  Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
  FAtt · 16 Sep 2020

The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
  Timo Freiesleben
  GAN · 11 Sep 2020

In Search of Lost Domain Generalization
  Ishaan Gulrajani, David Lopez-Paz
  OOD · 02 Jul 2020

Measuring Robustness to Natural Distribution Shifts in Image Classification
  Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt
  OOD · 01 Jul 2020

The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
  Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, ..., Samyak Parajuli, Mike Guo, Basel Alomair, Jacob Steinhardt, Justin Gilmer
  OOD · 29 Jun 2020

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
  Dan Hendrycks, Norman Mu, E. D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan
  OOD, UQCV · 05 Dec 2019

Learning Model-Agnostic Counterfactual Explanations for Tabular Data
  Martin Pawelczyk, Johannes Haug, Klaus Broelemann, Gjergji Kasneci
  OOD, CML · 21 Oct 2019

Accurate, reliable and fast robustness evaluation
  Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge
  AAML, OOD · 01 Jul 2019

Adversarial Examples Are Not Bugs, They Are Features
  Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
  SILM · 06 May 2019

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
  Dan Hendrycks, Thomas G. Dietterich
  OOD, VLM · 28 Mar 2019

Certified Adversarial Robustness via Randomized Smoothing
  Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter
  AAML · 08 Feb 2019

A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms
  Yoshua Bengio, T. Deleu, Nasim Rahaman, Nan Rosemary Ke, Sébastien Lachapelle, O. Bilaniuk, Anirudh Goyal, C. Pal
  CML, OOD · 30 Jan 2019

Explaining Explanations in AI
  Brent Mittelstadt, Chris Russell, Sandra Wachter
  XAI · 04 Nov 2018

Adversarial Robustness Toolbox v1.0.0
  Maria-Irina Nicolae, M. Sinn, Minh-Ngoc Tran, Beat Buesser, Ambrish Rawat, ..., Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian Molloy, Ben Edwards
  AAML, VLM · 03 Jul 2018

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
  Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, D. Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel
  AAML · 31 Jan 2018

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
  Battista Biggio, Fabio Roli
  AAML · 08 Dec 2017

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
  Sandra Wachter, Brent Mittelstadt, Chris Russell
  MLAU · 01 Nov 2017

Feature-Guided Black-Box Safety Testing of Deep Neural Networks
  Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska
  AAML · 21 Oct 2017

Security Evaluation of Pattern Classifiers under Attack
  Battista Biggio, Giorgio Fumera, Fabio Roli
  AAML · 02 Sep 2017

Evasion Attacks against Machine Learning at Test Time
  Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli
  AAML · 21 Aug 2017

Robust Physical-World Attacks on Deep Learning Models
  Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Yue Liu, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Basel Alomair
  AAML · 27 Jul 2017

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
  Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
  AAML · 03 Feb 2017

Safety Verification of Deep Neural Networks
  Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu
  AAML · 21 Oct 2016

A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples
  T. Tanay, Lewis D. Griffin
  AAML · 27 Aug 2016

Adversarial examples in the physical world
  Alexey Kurakin, Ian Goodfellow, Samy Bengio
  SILM, AAML · 08 Jul 2016

DeepFool: a simple and accurate method to fool deep neural networks
  Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
  AAML · 14 Nov 2015

Explaining and Harnessing Adversarial Examples
  Ian Goodfellow, Jonathon Shlens, Christian Szegedy
  AAML, GAN · 20 Dec 2014

Intriguing properties of neural networks
  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
  AAML · 21 Dec 2013

Domain Generalization via Invariant Feature Representation
  Krikamol Muandet, David Balduzzi, Bernhard Schölkopf
  OOD · 10 Jan 2013