Parameter estimation is a fundamental challenge in machine learning, crucial for tasks such as neural network weight fitting and Bayesian inference. This paper focuses on the complexity of estimating the translation and shrinkage parameters $\mu$ and $\sigma$ of a distribution with density of the form $\sigma^{-d}\, p\!\left(\tfrac{x - \mu}{\sigma}\right)$, where $p$ is a known density in $\mathbb{R}^d$, given samples from the distribution. We highlight that while maximum likelihood estimation (MLE) of these parameters is NP-hard, $\varepsilon$-approximations for arbitrary $\varepsilon > 0$ can be obtained in polynomial time using the Wasserstein distance.
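To make the idea concrete, the following is a minimal one-dimensional sketch of the general approach of replacing the likelihood objective with a Wasserstein-distance objective; it is not the paper's algorithm. The choice of a standard normal base density $p$, the sample sizes, and the use of SciPy's `wasserstein_distance` with a Nelder-Mead optimizer are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's method): fit translation mu and
# shrinkage sigma of a 1-D location-scale family (1/sigma) * p((x - mu) / sigma)
# by minimizing the empirical 1-Wasserstein distance instead of the likelihood.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Known base density p: standard normal (assumption for this example),
# represented by a large reference sample.
base_samples = rng.standard_normal(5000)

# Observed samples drawn from (1/sigma) * p((x - mu) / sigma) with unknown mu, sigma.
true_mu, true_sigma = 2.0, 0.5
data = true_mu + true_sigma * rng.standard_normal(1000)

def w1_objective(theta):
    """1-Wasserstein distance between the data and the shifted/scaled base sample."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)          # parameterize by log(sigma) to keep sigma > 0
    model_samples = mu + sigma * base_samples
    return wasserstein_distance(data, model_samples)

# Derivative-free minimization of the Wasserstein objective over (mu, log sigma).
result = minimize(w1_objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"estimated mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```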