Almost sure convergence via pairwise independence

If {A_1, A_2, \dots} are pairwise independent events with {\sum_{n=1}^{\infty}P(A_n)=\infty}, then as {n \rightarrow \infty}

\displaystyle \boxed{ \frac{\sum_{m=1}^{n}\mathbb{I}_{A_m}}{\sum_{m=1}^{n}P(A_m)} \xrightarrow{a.s.} 1 }

Proof:

Let {X_m = \mathbb{I}_{A_m}} and {S_n = X_1 + \dots + X_n}. Since the {A_m} are pairwise independent, the {X_m} are uncorrelated and thus

\displaystyle var(S_n) = var(X_1) + ... + var(X_n)

Since {X_m \in \{0,1 \}}

\displaystyle var(X_m) \leq \mathbb{E}[X_m^2] = \mathbb{E}[X_m] \Rightarrow var(S_n) \leq \mathbb{E} [S_n]
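The statement is easy to check numerically. Below is a minimal Monte Carlo sketch (not from the original post): it takes fully independent events {A_n} with {P(A_n) = \min(1, n^{-1/2})}, so that independence implies pairwise independence and {\sum_n P(A_n)} diverges, and compares the running count of occurrences with the running sum of probabilities.

```python
import random

# Simulate independent events A_n with p_n = min(1, 1/sqrt(n)).
# Independence implies pairwise independence, and sum(p_n) diverges,
# so the theorem predicts indicator_sum / prob_sum -> 1.
random.seed(0)

N = 100_000
indicator_sum = 0   # sum of I_{A_n}, i.e. how many events occurred
prob_sum = 0.0      # sum of P(A_n)
for n in range(1, N + 1):
    p = min(1.0, n ** -0.5)
    prob_sum += p
    if random.random() < p:
        indicator_sum += 1

ratio = indicator_sum / prob_sum
print(ratio)  # typically close to 1 for large N
```

With {N = 10^5} the probability sum is roughly {2\sqrt{N} \approx 632}, so the relative fluctuation of the ratio is on the order of a few percent.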



Karl Popper: Conjectures and Refutations

(1) It is easy to obtain confirmations, or verifications, for nearly every theory, if we look for confirmations.

(2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.

(3) Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.

(4) A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

(5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

(6) Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of ‘corroborating evidence’.)

(7) Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status.

—————————-

Excerpt from a lecture given by Karl Popper at Peterhouse, Cambridge, in Summer 1953, as part of a course on Developments and trends in contemporary British philosophy.

Very brief notes on measures: From σ-fields to Carathéodory’s Theorem

Definition 1. A {\sigma}-field {\mathcal{F}} is a non-empty collection of subsets of the sample space {\Omega} closed under the formation of complements and countable unions (or equivalently of countable intersections – note {\bigcap_{i} A_i = (\bigcup_i A_i^c)^c}). Hence {\mathcal{F}} is a {\sigma}-field if

1. {A^c \in \mathcal{F}} whenever {A \in \mathcal{F}}
2. {\bigcup_{i=1}^{\infty} A_i \in \mathcal{F}} whenever {A_i \in \mathcal{F}, i \geq 1}
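On a finite sample space these two closure conditions can be verified mechanically, since countable unions reduce to finite ones. Here is a small Python sketch (the collections chosen are hypothetical examples, not from the original notes):

```python
# Check whether a collection F of subsets of a finite omega is a sigma-field:
# non-empty, closed under complements, and closed under (finite) unions.
def is_sigma_field(F, omega):
    if not F:
        return False
    for A in F:
        if omega - A not in F:       # closure under complements
            return False
    for A in F:
        for B in F:
            if A | B not in F:       # closure under unions
                return False
    return True

omega = frozenset({1, 2, 3, 4})

# A valid sigma-field: generated by the partition {1,2} / {3,4}.
F = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), omega}
print(is_sigma_field(F, omega))  # True

# Not a sigma-field: the complement of {1,2} is missing.
G = {frozenset(), frozenset({1, 2}), omega}
print(is_sigma_field(G, omega))  # False
```

Note that non-emptiness together with closure under complements and unions already forces {\varnothing} and {\Omega} to belong to {\mathcal{F}}.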

Definition 2. Set functions and measures. Let {S} be a set and {\Sigma_0} be an algebra on {S}, and let {\mu_0} be a non-negative set function

\displaystyle \mu_0: \Sigma_0 \rightarrow [0, \infty]

  • {\mu_0} is additive if {\mu_0 (\varnothing) =0} and, for {F,G \in \Sigma_0},

    \displaystyle F \cap G = \varnothing \qquad \Rightarrow \qquad \mu_0(F \cup G ) = \mu_0(F) + \mu_0(G)

  • The map {\mu_0} is called countably additive (or {\sigma}-additive) if {\mu_0 (\varnothing)=0} and whenever {(F_n: n \in \mathbb{N})} is a sequence of disjoint sets in {\Sigma_0} with union {F = \bigcup_n F_n} in {\Sigma_0}, then

    \displaystyle \mu_0 (F) = \sum_{n}\mu_0 (F_n)

  • Let {(S, \Sigma)} be a measurable space, so that {\Sigma} is a {\sigma}-algebra on {S}.
  • A map {\mu: \Sigma \rightarrow [0,\infty]} is called a measure on {(S, \Sigma)} if {\mu} is countably additive. The triple {(S, \Sigma, \mu)} is called a measure space.
  • The measure {\mu} is called finite if

    \displaystyle \mu(S) < \infty,

    and {\sigma}-finite if there exist {S_n \in \Sigma} ({n \in \mathbb{N}}) such that

    \displaystyle \mu(S_n)< \infty \quad \forall n \in \mathbb{N} \qquad \text{and} \qquad \bigcup_n S_n = S.

  • Measure {\mu} is called a probability measure if \displaystyle \mu(S) = 1, and {(S, \Sigma, \mu)} is then called a probability triple.
  • An element {F} of {\Sigma} is called {\mu}-null if {\mu(F)=0}.
  • A statement {\mathcal{S}} about points {s} of {S} is said to hold almost everywhere (a.e.) if

    \displaystyle F \equiv \{ s: \mathcal{S}(s) \text{ is false} \} \in \Sigma \text{ and } \mu(F)=0.
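The definitions above can be illustrated on a finite space, where a measure on the power set is determined by its point masses. The following sketch (a hypothetical example, using exact rationals to avoid floating-point noise) checks additivity on disjoint sets and that the total mass is 1, making {\mu} a probability measure:

```python
from fractions import Fraction

# A measure on the power set of a finite S, given by point masses.
S = frozenset({"a", "b", "c"})
mass = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def mu(F):
    """Measure of a subset F of S: the sum of its point masses."""
    return sum(mass[s] for s in F)

# Additivity on disjoint sets: mu(A u B) = mu(A) + mu(B).
A, B = frozenset({"a"}), frozenset({"b", "c"})
assert A & B == frozenset()
print(mu(A | B) == mu(A) + mu(B))  # True

print(mu(frozenset()))  # 0, as required of any measure
print(mu(S))            # 1, so (S, 2^S, mu) is a probability triple
```

On a finite space finite additivity and countable additivity coincide, since any disjoint sequence has only finitely many non-empty terms.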
