ResearchTrend.AI

Poisoned classifiers are not only backdoored, they are fundamentally broken

18 October 2020 · arXiv:2010.09080
Mingjie Sun, Siddhant Agarwal, J. Zico Kolter

Papers citing "Poisoned classifiers are not only backdoored, they are fundamentally broken"

6 of 6 citing papers shown:

  1. Trigger Hunting with a Topological Prior for Trojan Detection
     Xiaoling Hu, Xiaoyu Lin, Michael Cogswell, Yi Yao, Susmit Jha, Chao Chen
     AAML · 46 citations · 15 Oct 2021

  2. Backdoor Attacks on Self-Supervised Learning
     Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash
     SSL, AAML · 101 citations · 21 May 2021

  3. Manipulating SGD with Data Ordering Attacks
     Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson
     AAML · 91 citations · 19 Apr 2021

  4. TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation
     Todd P. Huster, E. Ekwedike
     SILM · 19 citations · 18 Mar 2021

  5. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
     Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
     SILM · 270 citations · 18 Dec 2020

  6. Adversarial Machine Learning at Scale
     Alexey Kurakin, Ian Goodfellow, Samy Bengio
     AAML · 3,112 citations · 04 Nov 2016