A functional large and moderate deviation principle for infinitely divisible processes driven by null-recurrent Markov chains

Suppose we are given a state space equipped with a null-recurrent Markov kernel, and suppose infinitely many particles with variable weights move on this space, each performing a random walk governed by the kernel. Consider the process formed by a weighted functional of the positions of the particles at each time. Under suitable conditions on the initial distribution of the particles, this process is stationary over time. Non-Gaussian infinitely divisible (ID) distributions turn out to be natural candidates for the initial distribution, and the resulting process is then ID. We prove a functional large and moderate deviation principle for the partial sums of this process. The null recurrence of the Markov kernel induces long memory in the process, and this is reflected in the large deviation principle. It has been observed for certain short-memory processes that the large deviation principle is very similar to that of an i.i.d. sequence, whereas if the process is long-range dependent, the large deviations change dramatically. We show that a similar phenomenon occurs for infinitely divisible processes driven by Markov chains. Processes of this form give a rich class of non-Gaussian long-memory models which may be useful in practice.
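The particle construction described above can be sketched in symbols; note that the abstract does not fix any notation, so the symbols below (the weights $W_i$, the particle positions $Y_i(t)$, and the functional $f$) are illustrative assumptions rather than the paper's own notation:

```latex
% Illustrative sketch only; W_i, Y_i, f are assumed names, not the paper's notation.
% Particles i = 1, 2, ... carry weights W_i and have positions Y_i(t) that evolve
% as independent copies of the Markov chain with the null-recurrent kernel.
% A weighted functional of the particle configuration at time t gives the
% stationary infinitely divisible process
\[
  X_t \;=\; \sum_{i \ge 1} W_i \, f\bigl(Y_i(t)\bigr), \qquad t = 0, 1, 2, \dots,
\]
% and the large and moderate deviation principles concern the partial sums
\[
  S_n \;=\; \sum_{t=1}^{n} X_t .
\]
```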