Subpopulation Data Poisoning Attacks
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
AAML, SILM | 24 June 2020
arXiv: 2006.14026

Papers citing "Subpopulation Data Poisoning Attacks"

22 / 22 papers shown
Title
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning
Shihua Sun
Shridatt Sugrim
Angelos Stavrou
Haining Wang
AAML
63
1
0
13 Jul 2024
TrustGuard: GNN-based Robust and Explainable Trust Evaluation with
  Dynamicity Support
TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support
Jie Wang
Zheng Yan
Jiahe Lan
Elisa Bertino
Witold Pedrycz
25
11
0
23 Jun 2023
Measuring Equality in Machine Learning Security Defenses: A Case Study
  in Speech Recognition
Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition
Luke E. Richards
Edward Raff
Cynthia Matuszek
AAML
16
2
0
17 Feb 2023
FairRoad: Achieving Fairness for Recommender Systems with Optimized
  Antidote Data
FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang
Jia-Wei Liu
Michinari Momma
Yi Sun
27
4
0
13 Dec 2022
On Optimal Learning Under Targeted Data Poisoning
On Optimal Learning Under Targeted Data Poisoning
Steve Hanneke
Amin Karbasi
Mohammad Mahmoody
Idan Mehalel
Shay Moran
AAML
FedML
30
7
0
06 Oct 2022
Data Isotopes for Data Provenance in DNNs
Data Isotopes for Data Provenance in DNNs
Emily Wenger
Xiuyu Li
Ben Y. Zhao
Vitaly Shmatikov
20
12
0
29 Aug 2022
SNAP: Efficient Extraction of Private Properties with Poisoning
SNAP: Efficient Extraction of Private Properties with Poisoning
Harsh Chaudhari
John Abascal
Alina Oprea
Matthew Jagielski
Florian Tramèr
Jonathan R. Ullman
MIACV
34
30
0
25 Aug 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in
  Contrastive Learning
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu
Jinyuan Jia
Neil Zhenqiang Gong
25
34
0
13 May 2022
Subverting Fair Image Search with Generative Adversarial Perturbations
Subverting Fair Image Search with Generative Adversarial Perturbations
A. Ghosh
Matthew Jagielski
Chris L. Wilson
22
7
0
05 May 2022
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient
  Ensembling
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling
Kiyoon Yoo
Nojun Kwak
SILM
AAML
FedML
19
19
0
29 Apr 2022
Machine Learning Security against Data Poisoning: Are We There Yet?
Machine Learning Security against Data Poisoning: Are We There Yet?
Antonio Emanuele Cinà
Kathrin Grosse
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
AAML
24
35
0
12 Apr 2022
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine
  Learning
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning
A. Mondal
Harpreet Virk
Debayan Gupta
34
15
0
06 Feb 2022
Securing Federated Sensitive Topic Classification against Poisoning
  Attacks
Securing Federated Sensitive Topic Classification against Poisoning Attacks
Tianyue Chu
Álvaro García-Recuero
Costas Iordanou
Georgios Smaragdakis
Nikolaos Laoutaris
38
9
0
31 Jan 2022
Identifying a Training-Set Attack's Target Using Renormalized Influence
  Estimation
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh
Daniel Lowd
TDI
21
28
0
25 Jan 2022
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison
  Linear Classifiers?
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Antonio Emanuele Cinà
Sebastiano Vascon
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
AAML
19
9
0
23 Mar 2021
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks,
  and Defenses
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
D. Song
A. Madry
Bo-wen Li
Tom Goldstein
SILM
18
270
0
18 Dec 2020
Data Appraisal Without Data Sharing
Data Appraisal Without Data Sharing
Mimee Xu
L. V. D. van der Maaten
Awni Y. Hannun
TDI
36
6
0
11 Dec 2020
MalFox: Camouflaged Adversarial Malware Example Generation Based on
  Conv-GANs Against Black-Box Detectors
MalFox: Camouflaged Adversarial Malware Example Generation Based on Conv-GANs Against Black-Box Detectors
Fangtian Zhong
Xiuzhen Cheng
Dongxiao Yu
Bei Gong
Shuaiwen Leon Song
Jiguo Yu
AAML
43
29
0
03 Nov 2020
Concealed Data Poisoning Attacks on NLP Models
Concealed Data Poisoning Attacks on NLP Models
Eric Wallace
Tony Zhao
Shi Feng
Sameer Singh
SILM
16
18
0
23 Oct 2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive
  Review
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao
Bao Gia Doan
Zhi-Li Zhang
Siqi Ma
Jiliang Zhang
Anmin Fu
Surya Nepal
Hyoungshick Kim
AAML
36
220
0
21 Jul 2020
Blind Backdoors in Deep Learning Models
Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan
Vitaly Shmatikov
AAML
FedML
SILM
46
298
0
08 May 2020
Aggregated Residual Transformations for Deep Neural Networks
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie
Ross B. Girshick
Piotr Dollár
Z. Tu
Kaiming He
297
10,220
0
16 Nov 2016