Deep Probabilistic Models to Detect Data Poisoning Attacks

3 December 2019 · arXiv:1912.01206
Mahesh Subedar, Nilesh A. Ahuja, R. Krishnan, I. Ndiour, Omesh Tickoo
AAML, TDI

Papers citing "Deep Probabilistic Models to Detect Data Poisoning Attacks"

12 papers shown

Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection
Nilesh A. Ahuja, I. Ndiour, Trushant Kalyanpur, Omesh Tickoo
OODD · 69 citations · 25 Sep 2019

STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal
AAML · 801 citations · 18 Feb 2019

Poisoning Behavioral Malware Clustering
Battista Biggio, Konrad Rieck, Andrea Valenza, Christian Wressnegger, Igino Corona, Giorgio Giacinto, Fabio Roli
152 citations · 25 Nov 2018

Is feature selection secure against training data poisoning?
Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli
AAML, SILM · 423 citations · 21 Apr 2018

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML · 1,080 citations · 03 Apr 2018

Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches
Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, Roger C. Grosse
BDL · 308 citations · 12 Mar 2018

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
Andrea Paudice, Luis Muñoz-González, András Gyorgy, Emil C. Lupu
AAML · 145 citations · 08 Feb 2018

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM · 1,758 citations · 22 Aug 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
AAML · 751 citations · 09 Jun 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML · 1,851 citations · 20 May 2017

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
MedIm · 192,638 citations · 10 Dec 2015

Bayesian Active Learning for Classification and Preference Learning
N. Houlsby, Ferenc Huszár, Zoubin Ghahramani, M. Lengyel
901 citations · 24 Dec 2011