LAN for Linear Processes

Consider an m-vector linear process

\displaystyle \mathbf{X}(t) = \sum\limits_{j=0}^{\infty} A_{\theta}(j)\mathbf{U}(t-j), \qquad t \in \mathbb{Z}

where {\mathbf{U}(t)} are i.i.d. m-vector random variables with p.d.f. {p(\mathbf{u})>0} on {\mathbb{R}^m}, and {A_{\theta} (j)} are {m \times m} matrices depending on a parameter vector { \theta = (\theta_1,...,\theta_q)' \in \Theta \subset \mathbb{R}^q}.


Define the generating function

\displaystyle A_{\theta}(z) = \sum\limits_{j=0}^{\infty} A_{\theta}(j)z^j, \qquad |z| \leq 1.
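As a quick illustration (not from the source), a truncated version of the process can be simulated directly from its moving-average form. The coefficient choice {A_{\theta}(j) \propto j^{-1+D}} below is an arbitrary example satisfying the decay condition A1 i) stated next:

```python
import numpy as np

# Sketch: simulate X(t) = sum_{j=0}^{J} A(j) U(t-j), an m-vector linear
# process with coefficients decaying like j^{-1+D}. The mixing matrix M
# and the truncation point J are illustrative assumptions.
rng = np.random.default_rng(0)
m, D, J, n = 2, 0.3, 500, 1000           # dimension, memory, truncation, length

M = np.array([[0.5, 0.2], [0.1, 0.4]])
# A(0) = I_m; A(j) = j^{-1+D} * M for j >= 1
A = np.stack([np.eye(m)] + [j ** (-1.0 + D) * M for j in range(1, J + 1)])

U = rng.standard_normal((n + J, m))      # i.i.d. innovations with E[UU'] = I_m
# Window of lagged innovations U(t), U(t-1), ..., U(t-J) for each t
W = np.stack([U[J + t - np.arange(J + 1)] for t in range(n)])
X = np.einsum('jab,tjb->ta', A, W)       # X[t] = sum_j A(j) U(t-j)
print(X.shape)                           # (1000, 2)
```

The slow polynomial decay of the {A_{\theta}(j)} is what allows long-range dependence while keeping the process well defined.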

Assume that the following conditions hold.

A1 i) For some {D} with {0<D<1/2},

\displaystyle \pmb{|} A_{\theta}(j) \pmb{|} = O(j^{-1+D}), \qquad j \in \mathbb{N},

where { \pmb{|} A_{\theta}(j) \pmb{|}} denotes the sum of the absolute values of the entries of { A_{\theta}(j)}.

ii) Every { A_{\theta}(j)} is continuously two times differentiable with respect to {\theta}, and the derivatives satisfy

\displaystyle |\partial_{i_1} \partial_{i_2} \cdots \partial_{i_k} A_{\theta, ab}(j)| = O \{j^{-1+D}(\log j)^k\}, \qquad k=0,1,2,

for {a,b=1,...,m,} where {\partial_i = \partial/ \partial\theta_i}.

iii) {\det A_{\theta}(z) \neq 0} for {|z| \leq 1} and {A_{\theta}(z)^{-1}} can be expanded as follows:

\displaystyle A_{\theta}(z)^{-1} = I_m + B_{\theta}(1)z + B_{\theta}(2)z^2 + ...,

where { B_{\theta}(j)}, {j=1,2,...,} satisfy

\displaystyle \pmb{|} B_{\theta}(j) \pmb{|} = O(j^{-1-D}).

iv) Every { B_{\theta}(j)} is continuously two times differentiable with respect to {\theta}, and the derivatives satisfy

\displaystyle |\partial_{i_1} \partial_{i_2} \cdots \partial_{i_k} B_{\theta, ab}(j)| = O \{j^{-1-D}(\log j)^k\}, \qquad k=0,1,2,

for {a,b=1,...,m.}
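Condition A1 iii) can be made concrete numerically. The sketch below (my own illustration, not from the source; the helper name and toy coefficients are assumptions) computes the {B_{\theta}(j)} from the {A_{\theta}(j)} via the convolution identity {\sum_{j=0}^{n} B(j) A(n-j) = 0} for {n \geq 1}, with {B(0) = A(0)^{-1} = I_m}:

```python
import numpy as np

def invert_ma_coefficients(A, N):
    """Given matrix coefficients A(0)=I, A(1), ... of A(z), return the first
    N+1 coefficients B(0), ..., B(N) of B(z) = A(z)^{-1}, using the
    convolution identity sum_{j=0}^{n} B(j) A(n-j) = 0 for n >= 1."""
    m = A[0].shape[0]
    B = [np.eye(m)]                       # B(0) = I_m since A(0) = I_m
    for n in range(1, N + 1):
        acc = np.zeros((m, m))
        for j in range(n):
            # A(n-j) is zero beyond the stored coefficients
            Aj = A[n - j] if n - j < len(A) else np.zeros((m, m))
            acc += B[j] @ Aj
        B.append(-acc)
    return B

# Toy example: A(z) = I + 0.5 z, whose inverse has B(j) = (-0.5)^j I
m = 2
A = [np.eye(m), 0.5 * np.eye(m)]
B = invert_ma_coefficients(A, 6)
print(np.allclose(B[3], (-0.5) ** 3 * np.eye(m)))  # True
```

In this toy case the {B_{\theta}(j)} decay geometrically, which is faster than the {O(j^{-1-D})} rate required by A1 iii).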

A2 The density {p(\cdot)} satisfies

\displaystyle \lim\limits_{\| \mathbf{u} \| \rightarrow \infty} p(\mathbf{u})=0, \qquad \int \mathbf{u} p(\mathbf{u}) d \mathbf{u} =0, \qquad \text{and} \qquad \int \mathbf{uu'}p(\mathbf{u}) d \mathbf{u}=I_m

A3 The continuous derivative {Dp} of {p(\cdot)} exists on {\mathbb{R}^m}, and

\displaystyle \int \pmb{|} \phi(\mathbf{u}) \pmb{|}^4 p (\mathbf{u}) d \mathbf{u} < \infty,

where {\phi(\mathbf{u}) = p(\mathbf{u})^{-1}Dp(\mathbf{u})} is the score function.

From A1 iii), the linear process can be inverted:

\displaystyle \sum\limits_{j=0}^{\infty} B_{\theta}(j) \mathbf{X}(t-j) = \mathbf{U}(t), \qquad B_{\theta} (0) = I_m
and hence

\displaystyle \mathbf{U}(t) = \sum\limits_{j=0}^{t-1}B_{\theta}(j)\mathbf{X}(t-j)+\sum\limits_{r=0}^{\infty}C_{\theta}(r,t)\mathbf{U}(-r),


where

\displaystyle C_{\theta}(r,t)= \sum\limits_{r'=0}^{r}B_{\theta}(r'+t)A_{\theta}(r-r').

From A1 it can be seen that

\displaystyle C_{\theta}(r,t) = O(t^{-D/2}) O(r^{-1+D}).
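As a concrete special case (added here for illustration), take the scalar AR(1) model {X(t) = \theta X(t-1) + \mathbf{U}(t)} with {|\theta| < 1}. Then

\displaystyle A_{\theta}(j) = \theta^j \quad (j \geq 0), \qquad B_{\theta}(0) = 1, \quad B_{\theta}(1) = -\theta, \quad B_{\theta}(j) = 0 \quad (j \geq 2),

and since {B_{\theta}(r'+t) = 0} whenever {r'+t \geq 2}, the remainder coefficients satisfy {C_{\theta}(r,t) = 0} for all {t \geq 2}: only the first observation depends on the unobserved past.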

Let {Q_{n,\theta} } and {Q_{\mathbf{u}}} be the probability distributions of {(\mathbf{U}(s), s \leq 0, \mathbf{X}(1),...,\mathbf{X}(n))} and {(\mathbf{U}(s), s \leq 0)}, respectively. Then

\displaystyle d Q_{n,\theta} = \prod\limits_{t=1}^{n} p \left\lbrace \sum\limits_{j=0}^{t-1} B_{\theta}(j)\mathbf{X}(t-j) + \sum\limits_{r=0}^{\infty}C_{\theta}(r,t)\mathbf{U}(-r) \right\rbrace dQ_{\mathbf{u}}.

For two different values { \theta, \theta' \in \Theta}, the log-likelihood ratio is

\displaystyle \Lambda_n(\theta, \theta') \equiv \log \frac{d Q_{n,\theta'}}{d Q_{n,\theta}} = 2 \sum\limits_{k=1}^{n} \log \Phi_{n,k} (\theta, \theta')

where

\displaystyle \Phi^2_{n,k}(\theta, \theta') = \frac{1}{p(\mathbf{U}(k))}\, p \left\lbrace \mathbf{U}(k) + \sum\limits_{j=0}^{k-1}\left(B_{\theta'}(j)-B_{\theta}(j)\right) \mathbf{X}(k-j) + \sum\limits_{r=0}^{\infty}\left(C_{\theta'}(r,k)-C_{\theta}(r,k)\right)\mathbf{U}(-r) \right\rbrace.


Consider local alternatives of the form

\displaystyle \theta_n = \theta + \frac{1}{\sqrt{n}}h, \qquad h=(h_1,...,h_q)' \in \mathcal{H} \subset \mathbb{R}^q,

and define the Fisher information matrix

\displaystyle \mathcal{F}(p) = \mathbf{E} \left[ \phi(\mathbf{U}) \phi(\mathbf{U})' \right] = \int \phi(\mathbf{u}) \phi'(\mathbf{u}) p (\mathbf{u}) d \mathbf{u}
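For standard Gaussian innovations the score is {\phi(\mathbf{u}) = -\mathbf{u}}, so {\mathcal{F}(p) = I_m}. A quick Monte Carlo sanity check (my own sketch, assuming Gaussian {p}; not part of the source):

```python
import numpy as np

# Assumption: p is the standard m-variate Gaussian density, whose score is
# phi(u) = p^{-1} Dp(u) = -u, so F(p) = E[phi phi'] should equal I_m.
rng = np.random.default_rng(1)
m, N = 2, 200_000
u = rng.standard_normal((N, m))
phi = -u                                  # score of the standard normal density
F_hat = phi.T @ phi / N                   # Monte Carlo estimate of E[phi phi']
print(np.allclose(F_hat, np.eye(m), atol=0.05))  # True
```

The same recipe estimates {\mathcal{F}(p)} for any density whose score can be written down.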


and the autocovariance function

\displaystyle R(t) = \mathbf{E} \left[ \mathbf{X}(s) \mathbf{X}(t+s)' \right], \qquad t \in \mathbb{Z}.

Then the following theorem holds.


Theorem. The sequence of experiments

\displaystyle \mathcal{E}_n = \left\lbrace \mathbb{R}^{\mathbb{Z}}, \mathcal{B}^{\mathbb{Z}}, \left\lbrace Q_{n,\theta}: \theta \in \Theta \subset \mathbb{R}^q \right\rbrace \right\rbrace, \qquad n \in \mathbb{N},

is locally asymptotically normal (LAN) and equicontinuous on every compact subset {C} of {\mathcal{H}}. That is:

(i) {\forall \theta \in \Theta}, the log-likelihood ratio {\Lambda_n(\theta, \theta_n)} admits, under the hypothesis {H(p;\theta) } (i.e. {U(t) \sim p(.; \theta)}), as {n \rightarrow \infty}, the asymptotic representation

\displaystyle \Lambda_n (\theta, \theta_n) = \Delta_n(h;\theta) - \frac{1}{2} \Gamma_h(\theta) + o_p(1),

where

\displaystyle \Delta_n(h;\theta) = \frac{1}{\sqrt{n}} \sum\limits_{k=1}^{n} \phi(\mathbf{U}(k))' \sum\limits_{j=1}^{k-1}B_{h' \partial \theta}(j) \mathbf{X}(k-j)


and

\displaystyle \Gamma_h(\theta) = \mathrm{tr} \left\lbrace \mathcal{F}(p) \sum\limits_{j_1=1}^{\infty} \sum\limits_{j_2=1}^{\infty} B_{h' \partial \theta}(j_1) R(j_1-j_2) B_{h' \partial \theta}(j_2)' \right\rbrace


with

\displaystyle B_{h' \partial \theta}(j) = \sum\limits_{\ell=1}^{q} h_{\ell} \partial_{\ell} B_{\theta} (j).

(Note that, heuristically, in the one-parameter case a Taylor expansion gives { \log \frac{dQ_{\theta+h}}{dQ_{\theta}}(x) = h \dot{\ell}_{\theta}(x)+ \frac{1}{2}h^2 \ddot{\ell}_{\theta}(x) + o(h^2)}, which is the origin of the quadratic term above.)

(ii) Under {H(p;\theta)},

\displaystyle \Delta_n (h;\theta) \xrightarrow{d} N(0,\Gamma_h(\theta)).
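Part (ii) can be checked by simulation in the scalar AR(1) case. The sketch below is my own illustration under the assumption of standard Gaussian innovations: then {B_{\theta}(1) = -\theta}, {\phi(u) = -u}, so {\Delta_n(h;\theta) = h\, n^{-1/2} \sum_k U(k)X(k-1)} and {\Gamma_h(\theta) = h^2 R(0) = h^2/(1-\theta^2)}.

```python
import numpy as np

# Scalar (m = q = 1) AR(1) sketch: X(t) = theta X(t-1) + U(t), Gaussian U.
# We replicate Delta_n many times and compare its sample variance with
# Gamma_h = h^2 / (1 - theta^2), as predicted by the theorem.
rng = np.random.default_rng(2)
theta, h, n, reps, burn = 0.5, 1.0, 1000, 1000, 200

deltas = np.empty(reps)
for r in range(reps):
    U = rng.standard_normal(n + burn)
    X = np.empty(n + burn)
    X[0] = U[0]
    for t in range(1, n + burn):         # AR(1) recursion
        X[t] = theta * X[t - 1] + U[t]
    U, X = U[burn:], X[burn:]            # discard burn-in to reach stationarity
    deltas[r] = h * np.sum(U[1:] * X[:-1]) / np.sqrt(n)

gamma_h = h ** 2 / (1 - theta ** 2)      # = 4/3 here
print(np.var(deltas), gamma_h)           # the two should be close
```

The sample mean of the replicated {\Delta_n} is near 0 and the sample variance near {\Gamma_h}, consistent with the stated {N(0,\Gamma_h(\theta))} limit.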

(iii) {\forall n \in \mathbb{N}} and all {h \in \mathcal{H}}, the mapping

\displaystyle h \mapsto Q_{n, \theta_n}

is continuous with respect to the variational distance

\displaystyle ||P-Q|| = \sup \left\lbrace |P(A)-Q(A)|: A \in \mathcal{B}^{ \mathbb{Z}}\right\rbrace

A proof of the theorem will follow in a future post.



References

L. Le Cam and G. L. Yang (2000). Asymptotics in Statistics. Springer-Verlag, New York.
B. Garel and M. Hallin (1995). Local asymptotic normality of multivariate ARMA processes with linear trend. Ann. Inst. Statist. Math. 47, 551–579.
J.-P. Kreiss (1990). Local asymptotic normality for autoregression with infinite order. J. Statist. Plann. Inference 26, 185–219.
A. R. Swensen (1985). The asymptotic distribution of the likelihood ratio for autoregressive time series with a regression trend. J. Multivariate Anal. 16, 54–70.
A. W. van der Vaart (2000). Asymptotic Statistics. Cambridge University Press.
M. Taniguchi and Y. Kakizawa (1998). Asymptotic Theory of Statistical Inference for Time Series. Springer-Verlag, New York. (main source)