
Adaptive exponential power distribution with moving estimator for nonstationary time series

Abstract

While standard estimation assumes that all datapoints come from a probability distribution with the same fixed parameters $\theta$, we focus instead on maximum likelihood (ML) adaptive estimation for nonstationary time series: separately estimating parameters $\theta_T$ for each time $T$ based on the earlier values $(x_t)_{t<T}$, using the (exponential) moving ML estimator $\theta_T=\arg\max_\theta l_T$ for $l_T=\sum_{t<T} \eta^{T-t} \ln(\rho_\theta (x_t))$ and some $\eta\in(0,1]$. The computational cost of such a moving estimator is generally much higher, as the log-likelihood has to be optimized many times; however, in many cases it can be made inexpensive by exploiting dependencies between successive optimizations. We focus on one such example: the exponential power distribution (EPD) family $\rho(x)\propto \exp(-|(x-\mu)/\sigma|^\kappa/\kappa)$, which covers a wide range of tail behaviors, including the Gaussian ($\kappa=2$) and Laplace ($\kappa=1$) distributions. It is also convenient for adaptive estimation of the scale parameter $\sigma$, as its standard ML estimator takes $\sigma^\kappa$ to be the average of $|x-\mu|^\kappa$. By simply replacing this average with an exponential moving average, $(\sigma_{T+1})^\kappa=\eta(\sigma_T)^\kappa +(1-\eta)|x_T-\mu|^\kappa$, we can make it adaptive at low cost. The approach is tested on daily log-return series for DJIA companies, yielding substantially better log-likelihoods than standard (static) estimation, with the optimal tail type $\kappa$ varying between companies. The presented general alternative estimation philosophy provides tools which might be useful for building better models for the analysis of nonstationary time series.
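The adaptive scale update described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' reference implementation: the function name and defaults are chosen here for clarity, and the center $\mu$ and shape $\kappa$ are assumed fixed, with only $\sigma$ updated by the exponential moving average.

```python
import numpy as np

def adaptive_epd_scale(x, mu=0.0, kappa=2.0, eta=0.99, sigma0=1.0):
    """EMA-based adaptive ML estimate of the EPD scale parameter sigma.

    Implements (sigma_{T+1})^kappa = eta * (sigma_T)^kappa
                                   + (1 - eta) * |x_T - mu|^kappa,
    returning, for each time T, the sigma_T predicted from earlier
    values only (so each prediction uses no information from x_T).
    """
    s_kappa = sigma0 ** kappa          # running estimate of sigma^kappa
    sigmas = np.empty(len(x))
    for T, xT in enumerate(x):
        sigmas[T] = s_kappa ** (1.0 / kappa)  # prediction before seeing x_T
        s_kappa = eta * s_kappa + (1.0 - eta) * abs(xT - mu) ** kappa
    return sigmas
```

For $\kappa=2$ this reduces to an exponentially weighted variance estimate, and for $\kappa=1$ to an exponentially weighted mean absolute deviation; $\eta$ close to 1 gives slow adaptation, smaller $\eta$ tracks nonstationarity faster.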
