ResearchTrend.AI

First-order Adversarial Vulnerability of Neural Networks and Input Dimension
arXiv:1802.01421

5 February 2018
Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz
AAML

Papers citing "First-order Adversarial Vulnerability of Neural Networks and Input Dimension"

8 / 8 papers shown
A Learning Paradigm for Interpretable Gradients
Felipe Figueroa, Hanwei Zhang, R. Sicre, Yannis Avrithis, Stéphane Ayache
FAtt · 23 / 0 / 0 · 23 Apr 2024

Relating Adversarially Robust Generalization to Flat Minima
David Stutz, Matthias Hein, Bernt Schiele
OOD · 32 / 65 / 0 · 09 Apr 2021

Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning
Youngjoon Yu, Hong Joo Lee, Byeong Cheon Kim, Jung Uk Kim, Yong Man Ro
AAML · 38 / 18 / 0 · 22 May 2020

Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One
Will Grathwohl, Kuan-Chieh Jackson Wang, J. Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky
VLM · 43 / 528 / 0 · 06 Dec 2019

Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman
AAML · 16 / 77 / 0 · 27 May 2019

A Kernel Perspective for Regularizing Deep Neural Networks
A. Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal
11 / 15 / 0 · 30 Sep 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
SILM, AAML · 19 / 11 / 0 · 08 Sep 2018

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli
AAML · 40 / 1,388 / 0 · 08 Dec 2017