## Local Asymptotic Normality

The concept of Local Asymptotic Normality (LAN), introduced by Lucien Le Cam, is one of the most fundamental ideas of general asymptotic statistical theory. The LAN property is of particular importance in the asymptotic theory of testing, estimation and discriminant analysis. Many statistical models have likelihood ratios that are locally asymptotically normal; that is, the likelihood ratio processes of those models are asymptotically similar to that of a Gaussian location model.
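To make this concrete (a standard illustrative computation, not part of the argument below): in the Gaussian location model with ${P_{0,n}=N(0,1)^{\otimes n}}$ and ${P_{1,n}=N(h/\sqrt{n},1)^{\otimes n}}$, the log-likelihood ratio is exactly Gaussian for every ${n}$,

$\displaystyle \log \frac{dP_{1,n}}{dP_{0,n}} = \frac{h}{\sqrt{n}} \sum_{i=1}^{n} X_i - \frac{h^2}{2} \sim N\left( -\frac{h^2}{2},\, h^2 \right) \quad \text{under } P_{0,n},$

which is the limiting law obtained below with ${\tau = |h|}$.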

Let ${P_{0,n}}$ and ${P_{1,n}}$ be two sequences of probability measures on ${( \Omega_n, \mathcal{F}_n )}$. Suppose there is a sequence ${\mathcal{F}_{n,k}}$, ${k=1,...,k_n}$, of sub-${\sigma}$-algebras of ${\mathcal{F}_n}$ such that ${\mathcal{F}_{n,k} \subset \mathcal{F}_{n,k+1}}$ and ${\mathcal{F}_{n,k_n} = \mathcal{F}_n}$. Let ${P_{i,n,k}}$ be the restriction of ${P_{i,n}}$ to ${\mathcal{F}_{n,k}}$, and let ${\gamma_{n,k}}$ be the Radon–Nikodym density, taken on ${\mathcal{F}_{n,k}}$, of the part of ${P_{1,n,k}}$ that is dominated by ${P_{0,n,k}}$. Put

$\displaystyle Y_{n,k} = (\gamma_{n,k}/\gamma_{n,k-1})^{1/2} -1$

where ${\gamma_{n,0}=1}$ and ${n=1,2,...}$.
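For orientation (an illustrative specialization, with per-observation densities ${f_{0,n}}$, ${f_{1,n}}$ assumed for concreteness): in the i.i.d. case one takes ${\mathcal{F}_{n,k} = \sigma(X_1,...,X_k)}$, so that ${\gamma_{n,k}}$ factorizes over observations and

$\displaystyle Y_{n,k} = \left( \frac{f_{1,n}(X_k)}{f_{0,n}(X_k)} \right)^{1/2} - 1,$

the square-root likelihood-ratio increment of the ${k}$-th observation, shifted so that it is small when the two densities are close.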

The logarithm of the likelihood ratio

$\displaystyle \Lambda_n = \log \frac{dP_{1,n}}{dP_{0,n}}$

taken on ${\mathcal{F}_n}$ is then

$\displaystyle \Lambda_n = 2 \sum_k \log (Y_{n,k}+1)$

since ${ \log (\gamma_{n,k}/\gamma_{n,k-1}) = 2\log(Y_{n,k}+1) }$.

Theorem (Le Cam 1986). Suppose that under ${P_{0,n}}$ the following conditions are satisfied:

• L1: ${\max_k |Y_{n,k}| \xrightarrow{p} 0}$
• L2: ${\sum_{k}Y^2_{n,k} \xrightarrow{p} \tau^2/4 }$,
• L3: ${\sum_{k}E(Y^2_{n,k}+2Y_{n,k}| \mathcal{F}_{n,k-1}) \xrightarrow{p} 0}$, and
• L4: ${\sum_k E\{ Y^2_{n,k} \mathbb{I}(|Y_{n,k}|> \delta)| \mathcal{F}_{n,k-1} \} \xrightarrow{p} 0}$ for some ${\delta > 0}$.

Then

$\displaystyle \boxed{ \Lambda_n \xrightarrow{d} N(-\tau^2/2,\tau^2)}.$
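Before the proof, the statement can be sanity-checked by simulation in the Gaussian location model ${P_{0,n}=N(0,1)^{\otimes n}}$, ${P_{1,n}=N(h/\sqrt{n},1)^{\otimes n}}$, where ${\Lambda_n = (h/\sqrt{n})\sum_i X_i - h^2/2}$ is exactly ${N(-\tau^2/2, \tau^2)}$ with ${\tau = |h|}$. A minimal sketch (the choices of ${h}$, sample size, replication count, and seed are illustrative, not from the text):

```python
import math
import random

def lambda_n(n: int, h: float, rng: random.Random) -> float:
    # Log-likelihood ratio of N(h/sqrt(n),1)^n vs N(0,1)^n
    # for one sample drawn under the null P_{0,n}.
    s = sum(rng.gauss(0.0, 1.0) for _ in range(n))
    return (h / math.sqrt(n)) * s - h * h / 2.0

rng = random.Random(0)
h, n, reps = 1.0, 400, 2000
draws = [lambda_n(n, h, rng) for _ in range(reps)]
mean = sum(draws) / reps
var = sum((d - mean) ** 2 for d in draws) / (reps - 1)
# mean should be close to -tau^2/2 = -0.5 and var close to tau^2 = 1.0.
print(mean, var)
```

In this particular model the normality is exact for every ${n}$, so the simulation only exercises the mean and variance of ${\Lambda_n}$.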

Proof: Note that

$\displaystyle \begin{array}{rcl} 2\log(y+1) &=& 2 \left( \log(0+1) + (y-0)\frac{1}{0+1}- \frac{1}{2} (y-0)^2\frac{1}{(0+1)^2} \right) +y^2R(y) \\ &=& 2y-y^2+y^2R(y) \end{array}$

where ${R(y) \rightarrow 0}$ as ${y \rightarrow 0}$. Note that

$\displaystyle \left| \sum\limits_{k} Y^2_{n,k} R(Y_{n,k}) \right| \leq \left(\sum\limits_{k} Y^2_{n,k}\right) \underbrace{\max_{k} |R(Y_{n,k})|}_{ \text{0 by L1}} \xrightarrow{p} 0$

Hence

$\displaystyle \begin{array}{rcl} \Lambda_n &=& 2 \sum_k \log (Y_{n,k}+1) \\ &=& \sum_k (2 Y_{n,k} +Y_{n,k}^2) - 2 \sum_k Y_{n,k}^2 + o_{p_{0,n}}(1) \\ &=& \sum_k (2 Y_{n,k} +Y_{n,k}^2) - \underbrace{\frac{\tau^2}{2}}_{L2} + o_{p_{0,n}}(1) \end{array}$

For ${\delta >0}$ as in condition L4, define

$\displaystyle Z_{n,k} = Y_{n,k} \mathbb{I} (|Y_{n,k}|\leq \delta)$

hence, using L1,

$\displaystyle \sum_k Y_{n,k} - \sum_k Z_{n,k} = \sum_kY_{n,k} \mathbb{I} (|Y_{n,k}|> \delta) \xrightarrow{p} 0$

and

$\displaystyle \sum_k Y_{n,k}^2 - \sum_k Z_{n,k}^2 \xrightarrow{p} 0$

Note that for every ${\epsilon > 0}$

$\displaystyle \begin{array}{rcl} P(\max_k |Y_{n,k}| > \epsilon) &=& P \left\lbrace \sum_k |Y_{n,k}| \mathbb{I}(|Y_{n,k}|>\epsilon) > \epsilon \right\rbrace \\ &=& P \left\lbrace \sum_k Y_{n,k}^2 \mathbb{I}(|Y_{n,k}|>\epsilon) > \epsilon^2 \right\rbrace \end{array}$

Note also that

$\displaystyle \sum_k E\{Y^2_{n,k} \mathbb{I}(|Y_{n,k}|>\delta)| \mathcal{F}_{n,k-1}\} \geq \delta \sum_k E\{|Y_{n,k}| \mathbb{I}(|Y_{n,k}|>\delta)| \mathcal{F}_{n,k-1}\} \geq 0$

Hence, using L3 and L4,

$\displaystyle \begin{array}{rcl} \sum_k E \{(Y^2_{n,k}+2Y_{n,k}) \mathbb{I}(|Y_{n,k}|\leq \delta)| \mathcal{F}_{n,k-1} \} &=& \underbrace{\sum_k E \{(Y^2_{n,k}+2Y_{n,k})| \mathcal{F}_{n,k-1} \}}_{\text{0 by L3}} \\ &-& \underbrace{\sum_k E \{(Y^2_{n,k}+2Y_{n,k}) \mathbb{I}(|Y_{n,k}|> \delta)| \mathcal{F}_{n,k-1} \}}_{\text{0 by L4 and the inequality above}} \\ &\xrightarrow{p}& 0 \end{array}$

Thus ${\Lambda_n }$ can be expressed as

$\displaystyle \begin{array}{rcl} \Lambda_n &=& \sum_k (2 Z_{n,k} +Z_{n,k}^2) - \frac{\tau^2}{2} + o_{p_{0,n}}(1) \\ &=& \sum_k W_{n,k} + \sum_k E \{(Y^2_{n,k}+2Y_{n,k}) \mathbb{I}(|Y_{n,k}|\leq \delta)| \mathcal{F}_{n,k-1} \} - \frac{\tau^2}{2} + o_{p_{0,n}}(1) \\ &=& \sum_k W_{n,k} - \frac{\tau^2}{2} + o_{p_{0,n}}(1) \end{array}$

where

$\displaystyle W_{n,k} = 2Z_{n,k}+Z^2_{n,k} - E(2Z_{n,k}+Z^2_{n,k}|\mathcal{F}_{n,k-1})$

Since ${\max_k |Z_{n,k}| \xrightarrow{p} 0}$ and ${|Z_{n,k}| \leq \delta}$, the dominated convergence theorem gives

$\displaystyle \begin{array}{rcl} E \left\lbrace \max_k |E(2Z_{n,k}+Z_{n,k}^2 |\mathcal{F}_{n,k-1} )| \right\rbrace & \leq & E \left\lbrace E \left( 2 \max_k|Z_{n,k}| + \max_k Z_{n,k}^2 | \mathcal{F}_{n,k-1} \right) \right\rbrace \\ &=& E \left( 2 \max_k|Z_{n,k}| + \max_k Z_{n,k}^2 \right) \rightarrow 0 \end{array}$

which implies that

$\displaystyle \max_k |W_{n,k}| \xrightarrow{p} 0$

Note that ${\{W_{n,k}\}}$ is a martingale difference array and that

$\displaystyle \left| \sum\limits_{k} Z_{n,k}^3 \right| \leq \left( \max_k |Z_{n,k}| \right) \sum\limits_{k} Z_{n,k}^2 \xrightarrow{p} 0,$

$\displaystyle \left| \sum\limits_{k} Z_{n,k}^4 \right| \leq \left( \max_k |Z_{n,k}|^2 \right) \sum\limits_{k} Z_{n,k}^2 \xrightarrow{p} 0,$

and

$\displaystyle \sum\limits_{k} \left\lbrace E \left( 2 Z_{n,k} + Z_{n,k}^2 | \mathcal{F}_{n,k-1} \right) \right\rbrace^2 \xrightarrow{p} 0$

Hence

$\displaystyle \sum\limits_{k} W_{n,k}^2 = 4 \sum\limits_{k} Z_{n,k}^2 + o_{p_{0,n}}(1) \xrightarrow{p} \tau^2$

Note that (i) ${\max_k |W_{n,k}|}$ is uniformly bounded in ${L^2(\Omega)}$ norm, (ii) ${\max_k |W_{n,k}| \xrightarrow{p} 0}$, and (iii) ${\sum\limits_{k} W_{n,k}^2 \xrightarrow{p} \tau^2 }$.

Hence, applying McLeish's central limit theorem for martingale difference arrays, we conclude that

$\displaystyle \boxed{\Lambda_n \xrightarrow{d} N(-\tau^2/2, \tau^2)}$

$\Box$
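Conditions L1 and L2 can also be illustrated numerically. In the Gaussian location model ${P_{0,n}=N(0,1)^{\otimes n}}$, ${P_{1,n}=N(h/\sqrt{n},1)^{\otimes n}}$, the increments are ${Y_{n,k} = \exp\{hX_k/(2\sqrt{n}) - h^2/(4n)\} - 1}$ and ${\tau = |h|}$. A sketch (the model, ${n}$, ${h}$, and seed are chosen purely for illustration):

```python
import math
import random

def increments(n: int, h: float, rng: random.Random) -> list:
    # Square-root likelihood-ratio increments Y_{n,k} of
    # N(h/sqrt(n),1)^n vs N(0,1)^n, sampled under the null.
    a = h / (2.0 * math.sqrt(n))  # so a*a = h^2/(4n)
    return [math.exp(a * rng.gauss(0.0, 1.0) - a * a) - 1.0 for _ in range(n)]

rng = random.Random(1)
n, h = 200_000, 1.0
ys = increments(n, h, rng)
max_abs = max(abs(y) for y in ys)   # L1: should be near 0
sum_sq = sum(y * y for y in ys)     # L2: should be near tau^2/4 = 0.25
print(max_abs, sum_sq)
```
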

References:

L. Le Cam (1986). Asymptotic Methods in Statistical Decision Theory. Springer-Verlag, New York.
L. Le Cam, G. L. Yang (2000). Asymptotics in Statistics. Springer-Verlag, New York.
A. W. van der Vaart (2000). Asymptotic Statistics. Cambridge University Press.
D. L. McLeish (1974). Dependent Central Limit Theorems and Invariance Principles. Ann. Probab. 2(4), 620-628.
M. Taniguchi, Y. Kakizawa (2000). Asymptotic Theory of Statistical Inference for Time Series. Springer-Verlag, New York.
(notes based mostly on the latter text)