ResearchTrend.AI
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
arXiv:2201.09370 · 23 January 2022
Shagufta Mehnaz, S. V. Dibbo, Ehsanul Kabir, Ninghui Li, E. Bertino · MIACV

Papers citing "Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models"

33 / 33 papers shown
DeSIA: Attribute Inference Attacks Against Limited Fixed Aggregate Statistics
Yifeng Mao, Bozhidar Stevanoski, Yves-Alexandre de Montjoye · 25 Apr 2025

RAID: An In-Training Defense against Attribute Inference Attacks in Recommender Systems
Xiaohua Feng, Yuyuan Li, Fengyuan Yu, Ke Xiong, Junjie Fang, L. Zhang, Tianyu Du, Chaochao Chen · 15 Apr 2025 · AAML

Disparate Privacy Vulnerability: Targeted Attribute Inference Attacks and Defenses
Ehsanul Kabir, Lucas Craig, Shagufta Mehnaz · 05 Apr 2025 · MIACV, AAML

OFL: Opportunistic Federated Learning for Resource-Heterogeneous and Privacy-Aware Devices
Yunlong Mao, Mingyang Niu, Ziqin Dang, Chengxi Li, Hanning Xia, Yuejuan Zhu, Haoyu Bian, Yuan Zhang, Jingyu Hua, Sheng Zhong · 19 Mar 2025 · FedML

Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems
Song Xia, Yi Yu, Wenhan Yang, Meiwen Ding, Zhuo Chen, Lingyu Duan, Alex C. Kot, Xudong Jiang · 01 Mar 2025

TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models
Ding Li, Ziqi Zhang, Mengyu Yao, Y. Cai, Yao Guo, Xiangqun Chen · 15 Nov 2024 · FedML

Privacy Evaluation Benchmarks for NLP Models
Wei Huang, Yinggui Wang, Cen Chen · 24 Sep 2024 · ELM, SILM

Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
Rui Wen, Michael Backes, Yang Zhang · 05 Sep 2024 · TDI, AAML

Analyzing Inference Privacy Risks Through Gradients in Machine Learning
Zhuohang Li, Andrew Lowy, Jing Liu, T. Koike-Akino, K. Parsons, Bradley Malin, Ye Wang · 29 Aug 2024 · FedML

Inference Attacks: A Taxonomy, Survey, and Promising Directions
Feng Wu, Lei Cui, Shaowen Yao, Shui Yu · 04 Jun 2024

Better Membership Inference Privacy Measurement through Discrepancy
Ruihan Wu, Pengrun Huang, Kamalika Chaudhuri · 24 May 2024 · MIACV

Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity
Hanlin Gu, W. Ong, Chee Seng Chan, Lixin Fan · 23 May 2024 · MU

Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures
S. V. Dibbo, Adam Breuer, Juston S. Moore, Michael Teti · 21 Mar 2024 · AAML

State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey
Chaoyu Zhang, Shaoyu Li · 25 Feb 2024 · AILaw

A Survey on Decentralized Identifiers and Verifiable Credentials
Carlo Mazzocca, Abbas Acar, Selcuk Uluagac, R. Montanari, Paolo Bellavista, Mauro Conti · 04 Feb 2024

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan · 07 Dec 2023 · AAML

When Machine Learning Models Leak: An Exploration of Synthetic Training Data
Manel Slokom, Peter-Paul de Wolf, Martha Larson · 12 Oct 2023 · MIACV

No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML
Ziqi Zhang, Chen Gong, Yifeng Cai, Yuanyuan Yuan, Bingyan Liu, Ding Li, Yao Guo, Xiangqun Chen · 11 Oct 2023 · FedML

Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan R. Ullman · 05 Oct 2023

Membership inference attack with relative decision boundary distance
Jiacheng Xu, Chengxiang Tan · 07 Jun 2023

Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk?
Iyiola E. Olatunji, Anmar Hizber, Oliver Sihlovec, Megha Khosla · 01 Jun 2023 · AAML

Privacy Protectability: An Information-theoretical Approach
Siping Shi, Bihai Zhang, Dan Wang · 25 May 2023

Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning
Casey Meehan, Florian Bordes, Pascal Vincent, Kamalika Chaudhuri, Chuan Guo · 26 Apr 2023

A Privacy-Preserving Energy Theft Detection Model for Effective Demand-Response Management in Smart Grids
Arwa Alromih, John A. Clark, P. Gope · 23 Mar 2023

Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations
Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, P. Schramowski, Kristian Kersting · 16 Mar 2023 · MIACV

Purifier: Defending Data Inference Attacks via Transforming Confidence Scores
Ziqi Yang, Li-Juan Wang, D. Yang, Jie Wan, Ziming Zhao, E. Chang, Fan Zhang, Kui Ren · 01 Dec 2022 · AAML

How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers
Guangsheng Zhang, B. Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou · 20 Oct 2022 · PILM, MIACV

Attribute Inference Attacks in Online Multiplayer Video Games: a Case Study on Dota2
Pier Paolo Tricomi, Lisa Facciolo, Giovanni Apruzzese, Mauro Conti · 17 Oct 2022

FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems
Jiahui Chen, Yi Zhao, Qi Li, Xuewei Feng, Ke Xu · 08 Oct 2022 · AAML, FedML

Are Attribute Inference Attacks Just Imputation?
Bargav Jayaraman, David E. Evans · 02 Sep 2022 · TDI, MIACV

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini · 31 Mar 2022 · MIACV

Adversarial Patterns: Building Robust Android Malware Classifiers
Dipkamal Bhusal, Nidhi Rastogi · 04 Mar 2022 · AAML

Correlation inference attacks against machine learning models
Ana-Maria Creţu, Florent Guépin, Yves-Alexandre de Montjoye · 16 Dec 2021 · MIACV, AAML
