
DDDM: a Brain-Inspired Framework for Robust Classification
arXiv:2205.10117 · 1 May 2022
Xiyuan Chen, Xingyu Li, Yi Zhou, Tianming Yang
AAML, DiffM

Papers citing "DDDM: a Brain-Inspired Framework for Robust Classification" (9 papers)

1. Square Attack: a query-efficient black-box adversarial attack via random search
   Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein
   AAML · 29 Nov 2019

2. Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
   Yao Qin, Nicholas Carlini, Ian Goodfellow, G. Cottrell, Colin Raffel
   AAML · 22 Mar 2019

3. TextBugger: Generating Adversarial Text Against Real-world Applications
   Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang
   SILM, AAML · 13 Dec 2018

4. Stochastic Activation Pruning for Robust Adversarial Defense
   Guneet Singh Dhillon, Kamyar Azizzadenesheli, Zachary Chase Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, Anima Anandkumar
   AAML · 05 Mar 2018

5. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
   Anish Athalye, Nicholas Carlini, D. Wagner
   AAML · 01 Feb 2018

6. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
   Nicholas Carlini, D. Wagner
   AAML · 05 Jan 2018

7. Towards Deep Learning Models Resistant to Adversarial Attacks
   Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
   SILM, OOD · 19 Jun 2017

8. Towards Evaluating the Robustness of Neural Networks
   Nicholas Carlini, D. Wagner
   OOD, AAML · 16 Aug 2016

9. Improving neural networks by preventing co-adaptation of feature detectors
   Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
   VLM · 03 Jul 2012