ResearchTrend.AI

Variational Model Inversion Attacks (arXiv:2201.10787)

26 January 2022
Kuan-Chieh Jackson Wang, Yanzhe Fu, Ke Li, Ashish Khisti, R. Zemel, Alireza Makhzani [MIACV]

Papers citing "Variational Model Inversion Attacks"

50 / 55 papers shown
PEEL the Layers and Find Yourself: Revisiting Inference-time Data Leakage for Residual Neural Networks (08 Apr 2025)
Huzaifa Arif, K. Murugesan, Payel Das, Alex Gittens, Pin-Yu Chen [AAML]
From Head to Tail: Efficient Black-box Model Inversion Attack via Long-tailed Learning (20 Mar 2025)
Ziang Li, Hongguang Zhang, Juan Wang, Meihui Chen, Hongxin Hu, Wenzhe Yi, Xiaoyang Xu, Mengda Yang, Chenjun Ma
Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems (01 Mar 2025)
Song Xia, Yi Yu, Wenhan Yang, Meiwen Ding, Zhuo Chen, Lingyu Duan, Alex C. Kot, Xudong Jiang
PPO-MI: Efficient Black-Box Model Inversion via Proximal Policy Optimization (21 Feb 2025)
Xinpeng Shou
A Tale of Two Imperatives: Privacy and Explainability (30 Dec 2024)
Supriya Manna, Niladri Sett
MIBench: A Comprehensive Framework for Benchmarking Model Inversion Attack and Defense (07 Oct 2024)
Yixiang Qiu, Hongyao Yu, Hao Fang, Wenbo Yu, Bin Chen, Shu-Tao Xia, Ke Xu [AAML]
Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm? (05 Sep 2024)
Rui Wen, Michael Backes, Yang Zhang [TDI, AAML]
Analyzing Inference Privacy Risks Through Gradients in Machine Learning (29 Aug 2024)
Zhuohang Li, Andrew Lowy, Jing Liu, T. Koike-Akino, K. Parsons, Bradley Malin, Ye Wang [FedML]
A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks (18 Jul 2024)
Yixiang Qiu, Hao Fang, Hongyao Yu, Bin Chen, Meikang Qiu, Shu-Tao Xia [AAML]
Model Inversion Attacks Through Target-Specific Conditional Diffusion Models (16 Jul 2024)
Ouxiang Li, Yanbin Hao, Zhicai Wang, Bin Zhu, Shuo Wang, Zaixi Zhang, Fuli Feng [DiffM]
Prediction Exposes Your Face: Black-box Model Inversion via Prediction Alignment (11 Jul 2024)
Yufan Liu, Wanqian Zhang, Dayan Wu, Zheng-Shen Lin, Jingzi Gu, Weiping Wang
Reconstructing training data from document understanding models (05 Jun 2024)
Jérémie Dentan, Arnaud Paran, A. Shabou [AAML, SyDa]
Inference Attacks: A Taxonomy, Survey, and Promising Directions (04 Jun 2024)
Feng Wu, Lei Cui, Shaowen Yao, Shui Yu
Model Inversion Robustness: Can Transfer Learning Help? (09 May 2024)
Sy-Tuyen Ho, Koh Jun Hao, Keshigeyan Chandrasegaran, Ngoc-Bao Nguyen, Ngai-man Cheung
Distributional Black-Box Model Inversion Attack with Multi-Agent Reinforcement Learning (22 Apr 2024)
Huan Bao, Kaimin Wei, Yongdong Wu, Jin Qian, Robert H. Deng
Is Retain Set All You Need in Machine Unlearning? Restoring Performance of Unlearned Models with Out-Of-Distribution Images (19 Apr 2024)
Jacopo Bonato, Marco Cotogni, Luigi Sabetta [MU, CLL]
Knowledge Distillation-Based Model Extraction Attack using Private Counterfactual Explanations (04 Apr 2024)
Fatima Ezzeddine, Omran Ayoub, Silvia Giordano [AAML, MIACV]
Privacy Re-identification Attacks on Tabular GANs (31 Mar 2024)
Abdallah Alshantti, Adil Rasheed, Frank Westad [AAML]
MisGUIDE: Defense Against Data-Free Deep Learning Model Extraction (27 Mar 2024)
Mahendra Gurve, S. Behera, Satyadev Ahlawat, Yamuna Prasad [MIACV, AAML]
Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures (21 Mar 2024)
S. V. Dibbo, Adam Breuer, Juston S. Moore, Michael Teti [AAML]
Adaptive Hybrid Masking Strategy for Privacy-Preserving Face Recognition Against Model Inversion Attack (14 Mar 2024)
Yinggui Wang, Yuanqing Huang, Jianshu Li, Le Yang, Kai Song, Lei Wang [AAML, PICV]
Recent Advances, Applications, and Open Challenges in Machine Learning for Health: Reflections from Research Roundtables at ML4H 2023 Symposium (03 Mar 2024)
Hyewon Jeong, Sarah Jabbour, Yuzhe Yang, Rahul Thapta, Hussein Mozannar, ..., Linying Zhang, Harvineet Singh, Tom Hartvigsen, Helen Zhou, Chinasa T. Okolo [VLM, AI4TS, OOD]
Breaking the Black-Box: Confidence-Guided Model Inversion Attack for Distribution Shift (28 Feb 2024)
Xinhao Liu, Yingzhao Jiang, Zetao Lin
Bounding the Excess Risk for Linear Models Trained on Marginal-Preserving, Differentially-Private, Synthetic Data (06 Feb 2024)
Yvonne Zhou, Mingyu Liang, Ivan Brugere, Dana Dachman-Soled, Danial Dervovic, Antigoni Polychroniadou, Min Wu
Building Guardrails for Large Language Models (02 Feb 2024)
Yizhen Dong, Ronghui Mu, Gao Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang [OffRL]
BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks (01 Feb 2024)
Hamed Poursiami, Ihsen Alouani, Maryam Parsa [AAML]
Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them (22 Jan 2024)
Chao-Jung Liu, Boxi Chen, Wei Shao, Chris Zhang, Kelvin Wong, Yi Zhang
Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging (05 Dec 2023)
Alexander Ziller, Tamara T. Mueller, Simon Stieger, Leonhard F. Feiner, Johannes Brandt, R. Braren, Daniel Rueckert, Georgios Kaissis
CovarNav: Machine Unlearning via Model Inversion and Covariance Navigation (21 Nov 2023)
Ali Abbasi, Chayne Thrash, Elaheh Akbari, Daniel Zhang, Soheil Kolouri [MU]
BrainWash: A Poisoning Attack to Forget in Continual Learning (20 Nov 2023)
Ali Abbasi, Parsa Nooralinejad, Hamed Pirsiavash, Soheil Kolouri [CLL, KELM, AAML]
Label-Only Model Inversion Attacks via Knowledge Transfer (30 Oct 2023)
Ngoc-Bao Nguyen, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-man Cheung
PrivacyGAN: robust generative image privacy (19 Oct 2023)
M. Zameshina, Marlene Careil, Olivier Teytaud, Laurent Najman [PICV]
When Machine Learning Models Leak: An Exploration of Synthetic Training Data (12 Oct 2023)
Manel Slokom, Peter-Paul de Wolf, Martha Larson [MIACV]
Defending Our Privacy With Backdoors (12 Oct 2023)
Dominik Hintersdorf, Lukas Struppek, Daniel Neider, Kristian Kersting [SILM, AAML]
Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks (10 Oct 2023)
Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach (08 Sep 2023)
Sofiane Ouaari, Ali Burak Ünal, Mete Akgun, Nícolas Pfeifer
FusionAI: Decentralized Training and Deploying LLMs with Massive Consumer-Level GPUs (03 Sep 2023)
Zhenheng Tang, Yuxin Wang, Xin He, Longteng Zhang, Xinglin Pan, ..., Rongfei Zeng, Kaiyong Zhao, S. Shi, Bingsheng He, Xiaowen Chu
Model Inversion Attack via Dynamic Memory Learning (24 Aug 2023)
Gege Qi, YueFeng Chen, Xiaofeng Mao, Binyuan Hui, Xiaodan Li, Rong Zhang, Hui Xue
Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models (18 Aug 2023)
Dominik Hintersdorf, Lukas Struppek, Kristian Kersting [SILM]
Boosting Model Inversion Attacks with Adversarial Examples (24 Jun 2023)
Shuai Zhou, Tianqing Zhu, Dayong Ye, Xin Yu, Wanlei Zhou [AAML, MIACV]
Reinforcement Learning-Based Black-Box Model Inversion Attacks (10 Apr 2023)
Gyojin Han, Jaehyun Choi, Haeil Lee, Junmo Kim [MIACV]
Re-thinking Model Inversion Attacks Against Deep Neural Networks (04 Apr 2023)
Ngoc-Bao Nguyen, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-man Cheung
Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations (16 Mar 2023)
Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, P. Schramowski, Kristian Kersting [MIACV]
Decision-BADGE: Decision-based Adversarial Batch Attack with Directional Gradient Estimation (09 Mar 2023)
Geunhyeok Yu, Minwoo Jeon, Hyoseok Hwang [AAML]
Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network (20 Feb 2023)
Xiaojian Yuan, Kejiang Chen, Jie Zhang, Weiming Zhang, Neng H. Yu, Yangyi Zhang
CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models (08 Feb 2023)
Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schonherr, Mario Fritz [ELM]
Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging (03 Feb 2023)
Soroosh Tayebi Arasteh, Alexander Ziller, Christiane Kuhl, Marcus R. Makowski, S. Nebelung, R. Braren, Daniel Rueckert, Daniel Truhn, Georgios Kaissis [MedIm]
Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers (21 Sep 2022)
Ruisi Zhang, Seira Hidano, F. Koushanfar [SILM]
Does CLIP Know My Face? (15 Sep 2022)
Dominik Hintersdorf, Lukas Struppek, Manuel Brack, Felix Friedrich, P. Schramowski, Kristian Kersting [VLM]
Turning a Curse into a Blessing: Enabling In-Distribution-Data-Free Backdoor Removal via Stabilized Model Inversion (14 Jun 2022)
Si-An Chen, Yi Zeng, J. T. Wang, Won Park, Xun Chen, Lingjuan Lyu, Zhuoqing Mao, R. Jia