Expressive Losses for Verified Robustness via Convex Combinations

23 May 2023
Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. P. Kumar, Robert Stanforth, A. Lomuscio
AAML

Papers citing "Expressive Losses for Verified Robustness via Convex Combinations"

29 papers shown

Average Certified Radius is a Poor Metric for Randomized Smoothing
Chenhao Sun, Yuhao Mao, Mark Niklas Müller, Martin Vechev · AAML · 09 Oct 2024

On Using Certified Training towards Empirical Robustness
Alessandro De Palma, Serge Durand, Zakaria Chihani, François Terrier, Caterina Urban · OOD, AAML · 02 Oct 2024

CTBENCH: A Library and Benchmark for Certified Training
Yuhao Mao, Stefan Balauca, Martin Vechev · OOD · 07 Jun 2024

Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models
Naman D. Singh, Francesco Croce, Matthias Hein · OOD · 03 Mar 2023

Improved techniques for deterministic l2 robustness
Sahil Singla, Soheil Feizi · AAML · 15 Nov 2022

Certified Training: Small Boxes are All You Need
Mark Niklas Müller, Franziska Eckert, Marc Fischer, Martin Vechev · AAML · 10 Oct 2022

Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective
Bohang Zhang, Du Jiang, Di He, Liwei Wang · OOD · 04 Oct 2022

Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound
Claudio Ferrari, Mark Niklas Müller, Nikola Jovanović, Martin Vechev · 30 Apr 2022

In Defense of the Unitary Scalarization for Deep Multi-Task Learning
Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. P. Kumar · 11 Jan 2022

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
Yujia Huang, Huan Zhang, Yuanyuan Shi, J. Zico Kolter, Anima Anandkumar · 02 Nov 2021

The Second International Verification of Neural Networks Competition (VNN-COMP 2021): Summary and Results
Stanley Bak, Changliu Liu, Taylor T. Johnson · NAI · 31 Aug 2021

Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks
Kendra Albert, Maggie K. Delano, B. Kulynych, Ramnath Kumar · AAML · 11 Jul 2021

Orthogonalizing Convolutional Layers with the Cayley Transform
Asher Trockman, J. Zico Kolter · 14 Apr 2021

PRIMA: General and Precise Neural Network Certification via Scalable Convex Hull Approximations
Mark Niklas Müller, Gleb Makarchuk, Gagandeep Singh, Markus Püschel, Martin Vechev · 05 Mar 2021

On the Paradox of Certified Training
Nikola Jovanović, Mislav Balunović, Maximilian Baader, Martin Vechev · OOD · 12 Feb 2021

Scaling the Convex Barrier with Sparse Dual Algorithms
Alessandro De Palma, Harkirat Singh Behl, Rudy Bunel, Philip Torr, M. P. Kumar · 14 Jan 2021

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming
Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, J. Uesato, ..., Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli · AAML · 22 Oct 2020

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein · AAML · 03 Mar 2020

Lagrangian Decomposition for Neural Network Verification
Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip Torr, M. P. Kumar · 24 Feb 2020

Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Yue Liu, Duane S. Boning, Cho-Jui Hsieh · AAML · 14 Jun 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML · 08 Feb 2019

Strong mixed-integer programming formulations for trained neural networks
Ross Anderson, Joey Huchette, Christian Tjandraatmadja, J. Vielma · 20 Nov 2018

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli · AAML · 15 Feb 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner · AAML · 01 Feb 2018

A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets
P. Chrabaszcz, I. Loshchilov, Frank Hutter · SSeg, OOD · 27 Jul 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 19 Jun 2017

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer · AAML · 03 Feb 2017

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy · OOD · 11 Feb 2015

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 21 Dec 2013