Karl Popper: Conjectures and Refutations

(1) It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.

(2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.

(3) Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.

(4) A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

(5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

(6) Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of ‘corroborating evidence’.)

(7) Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status.

————————

Excerpt from a lecture given by Karl Popper at Peterhouse, Cambridge, in Summer 1953, as part of a course on Developments and trends in contemporary British philosophy.


Some notes on Kalman Filtering

State Space form

Measurement Equation

\displaystyle \boxed{\mathbf{\underbrace{y_{t}}_{N \times 1}=\underbrace{Z_{t}}_{N \times m}\underbrace{a_{t}}_{m \times 1}+d_{t}+\varepsilon_{t}}}

\displaystyle Var(\varepsilon_{t})= \mathbf{H_{t}}

Transition Equation

\displaystyle \boxed{\mathbf{\underbrace{a_{t}}_{m \times 1} =\underbrace{T_{t}}_{m \times m} a_{t-1}+c_{t}+\underbrace{R_{t}}_{m \times g} \underbrace{\eta_{t}}_{g \times 1}}}

\displaystyle Var(\eta_{t})=\mathbf{Q}_{t}

\displaystyle E(a_{0})= \mathbf{a_{0}}, \qquad Var(a_{0})=\mathbf{P_{0}}, \qquad E(\varepsilon_{t}a_{0}^{\top})=0, \qquad E(\eta_{t}a_{0}^{\top})=0

Future form

\displaystyle \mathbf{a_{t+1}=T_{t}a_{t}+c_{t}+R_{t}\eta_{t}}
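
To make the recursions concrete, here is a minimal Kalman filter sketch in Python/NumPy for the model above, assuming time-invariant system matrices {Z, d, H, T, c, R, Q}; the function name and the local-level example at the end are my own illustration, not part of the original notes.

import numpy as np

def kalman_filter(y, Z, d, H, T, c, R, Q, a0, P0):
    """Filtered state estimates for y_t = Z a_t + d + eps_t,
    a_t = T a_{t-1} + c + R eta_t, with Var(eps_t) = H, Var(eta_t) = Q."""
    a, P = a0, P0
    filtered = []
    for yt in y:
        # Prediction: project the state mean and covariance one step ahead
        a_pred = T @ a + c
        P_pred = T @ P @ T.T + R @ Q @ R.T
        # Update: correct the prediction with the new observation
        v = yt - (Z @ a_pred + d)              # innovation
        F = Z @ P_pred @ Z.T + H               # innovation covariance
        K = P_pred @ Z.T @ np.linalg.inv(F)    # Kalman gain
        a = a_pred + K @ v
        P = P_pred - K @ Z @ P_pred
        filtered.append(a)
    return np.array(filtered)

# Example: local level model (N = m = g = 1), a random walk observed in noise
rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0.0, 0.5, 200))
y = (level + rng.normal(0.0, 1.0, 200)).reshape(-1, 1)
est = kalman_filter(y, Z=np.eye(1), d=np.zeros(1), H=np.eye(1),
                    T=np.eye(1), c=np.zeros(1), R=np.eye(1),
                    Q=0.25 * np.eye(1), a0=np.zeros(1), P0=10.0 * np.eye(1))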



Expectation: Useful properties and inequalities


Let {X \geq 0} be a random variable on {(\Omega, \mathcal{F}, P)}. The expected value of {X} is defined as

\displaystyle \mathbb{E}(X) \equiv \int_{\Omega} X dP = \int_{\Omega} X(\omega) P (d \omega)

Inequalities

  • Jensen’s inequality. If {\varphi} is convex and {E|X|, E|\varphi(X)| < \infty}, then

\displaystyle \mathbb{E} (\varphi(X)) \geq \varphi(\mathbb{E}X)

  • Hölder’s inequality. If {p,q \in [1, \infty]} with {1/p + 1/q = 1}, then

\displaystyle \mathbb{E}|XY| \leq \|X\|_p \|Y\|_q

  • Cauchy–Schwarz inequality. The special case {p = q = 2} of Hölder’s inequality:

\displaystyle \mathbb{E}|XY| \leq \left( \mathbb{E}(X^2) \mathbb{E}(Y^2) \right)^{1/2}
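
These inequalities are easy to sanity-check by Monte Carlo. A small Python/NumPy sketch (the particular distributions are an arbitrary choice of mine, purely for illustration):

import numpy as np

rng = np.random.default_rng(42)
X = rng.exponential(1.0, size=1_000_000)   # a non-negative random variable
Y = rng.normal(size=1_000_000)

# Jensen with the convex function phi(x) = x^2: E[phi(X)] >= phi(E[X])
assert np.mean(X**2) >= np.mean(X)**2

# Cauchy-Schwarz (Hölder with p = q = 2): E|XY| <= (E[X^2] E[Y^2])^(1/2)
assert np.mean(np.abs(X * Y)) <= np.sqrt(np.mean(X**2) * np.mean(Y**2))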



A nice chart of univariate distribution relationships

[Figure: chart of relationships among univariate distributions]



Big Data for Volatility vs. Trend

So different aspects of Big Data — in this case dense vs. tall — are of different value for different things. Dense data promote accurate volatility estimation, and tall data promote accurate trend estimation.

More (No Hesitations blog)
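
The point can be seen in a toy simulation (my own sketch, assuming a Brownian motion with drift {dX_t = \mu \, dt + \sigma \, dW_t}): sampling a fixed span {[0, T]} more densely drives the realized-variance estimate of {\sigma} to the truth, while the drift estimator {(X_T - X_0)/T} has standard deviation {\sigma/\sqrt{T}} no matter how dense the sampling – only a taller span {T} reduces it.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, T = 0.1, 0.2, 1.0

for n in (10, 1_000, 100_000):               # ever denser sampling of [0, T]
    dt = T / n
    dX = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    sigma_hat = np.sqrt(np.sum(dX**2) / T)   # realized volatility -> sigma
    mu_hat = np.sum(dX) / T                  # trend estimate (X_T - X_0)/T
    print(n, round(sigma_hat, 4), round(mu_hat, 4))
# sigma_hat converges to sigma = 0.2 as n grows, while the sampling error
# of mu_hat stays at sigma/sqrt(T) however large n becomes.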


The limitations of randomised controlled trials


Very brief notes on measures: From σ-fields to Carathéodory’s Theorem

Definition 1. A {\sigma}-field {\mathcal{F}} is a non-empty collection of subsets of the sample space {\Omega} closed under the formation of complements and countable unions (or equivalently of countable intersections – note {\bigcap_{i} A_i = (\bigcup_i A_i^c)^c}). Hence {\mathcal{F}} is a {\sigma}-field if

1. {A^c \in \mathcal{F}} whenever {A \in \mathcal{F}}
2. {\bigcup_{i=1}^{\infty} A_i \in \mathcal{F}} whenever {A_i \in \mathcal{F}, i \geq 1}
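
For example, with {\Omega = \{1,2,3\}}, the collection {\{\varnothing, \{1\}, \{2,3\}, \Omega\}} is a {\sigma}-field, while {\{\varnothing, \{1\}, \Omega\}} is not, since it lacks the complement {\{1\}^c = \{2,3\}}. The two extreme cases are the trivial {\sigma}-field {\{\varnothing, \Omega\}} and the power set of {\Omega}.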

Definition 2. Set functions and measures. Let {S} be a set and {\Sigma_0} be an algebra on {S}, and let {\mu_0} be a non-negative set function

\displaystyle \mu_0: \Sigma_0 \rightarrow [0, \infty]

  • {\mu_0} is additive if {\mu_0 (\varnothing) =0} and, for {F,G \in \Sigma_0},

    \displaystyle F \cap G = \varnothing \qquad \Rightarrow \qquad \mu_0(F \cup G ) = \mu_0(F) + \mu_0(G)

  • The map {\mu_0} is called countably additive (or {\sigma}-additive) if {\mu_0 (\varnothing)=0} and whenever {(F_n: n \in \mathbb{N})} is a sequence of disjoint sets in {\Sigma_0} with union {F = \cup F_n} in {\Sigma_0}, then

    \displaystyle \mu_0 (F) = \sum_{n}\mu_0 (F_n)

  • Let {(S, \Sigma)} be a measurable space, so that {\Sigma} is a {\sigma}-algebra on {S}.
  • A map {\mu: \Sigma \rightarrow [0,\infty]} is called a measure on {(S, \Sigma)} if {\mu} is countably additive. The triple {(S, \Sigma, \mu)} is called a measure space.
  • The measure {\mu} is called finite if

    \displaystyle \mu(S) < \infty,

    and {\sigma}-finite if there exists a sequence {\{S_n\}} in {\Sigma} ({n \in \mathbb{N}}) such that

    \displaystyle \mu(S_n)< \infty \;\; \forall n \in \mathbb{N} \quad \text{ and } \quad \bigcup_n S_n = S

    (a concrete example follows this list).

  • Measure {\mu} is called a probability measure if \displaystyle \mu(S) = 1, and {(S, \Sigma, \mu)} is then called a probability triple.
  • An element {F} of {\Sigma} is called {\mu}-null if {\mu(F)=0}.
  • A statement {\mathcal{S}} about points {s} of {S} is said to hold almost everywhere (a.e.) if

    \displaystyle F \equiv \{ s: \mathcal{S}(s) \text{ is false} \} \in \Sigma \text{ and } \mu(F)=0.
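
A standard example that makes the finite/{\sigma}-finite distinction concrete: Lebesgue measure {\mathrm{Leb}} on {(\mathbb{R}, \mathcal{B}(\mathbb{R}))} is not finite, since {\mathrm{Leb}(\mathbb{R}) = \infty}, but it is {\sigma}-finite: take {S_n = [-n, n]}, so that {\mathrm{Leb}(S_n) = 2n < \infty} for every {n} and {\bigcup_n S_n = \mathbb{R}}.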

