
Martingales

Recall the martingales we learned in STA 347. In this section, we review the definition of a martingale and some of its properties.

Introduction and Definition

Martingales are a model for stochastic processes that "stay the same on average"; equivalently, they are a model of a fair game.

That is, a sequence $X_n$ with finite expectation is a martingale if $E[X_{n+1} \mid X_0, X_1, \ldots, X_n] = X_n$ for all $n \ge 0$; equivalently, for a Markov chain with state space $S$ and transition probabilities $p_{ij}$, $\sum_{j \in S} j\,p_{ij} = i$ for all $i \in S$.
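
To see the definition in action, here is a minimal Python sketch (our own illustration, not from the notes) that estimates the conditional mean $E[X_6 \mid X_5 = x]$ for a simple symmetric random walk by Monte Carlo; each estimate should be close to $x$.

```python
# A minimal illustration (not from the notes): check the martingale property
# of the simple symmetric random walk by estimating E[X_6 | X_5 = x].
import random
from collections import defaultdict

random.seed(0)

def walk(n_steps):
    """Simulate a simple symmetric random walk started at X_0 = 0."""
    x, path = 0, [0]
    for _ in range(n_steps):
        x += random.choice([-1, 1])  # each step is +1 or -1 with prob. 1/2
        path.append(x)
    return path

# Group paths by the value of X_5 and average X_6 within each group.
sums, counts = defaultdict(float), defaultdict(int)
for _ in range(200_000):
    p = walk(6)
    sums[p[5]] += p[6]
    counts[p[5]] += 1

for x in sorted(counts):
    print(f"E[X_6 | X_5 = {x:+d}] ≈ {sums[x] / counts[x]:+.3f}")  # ≈ x
```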

Properties

Let $X_n$ be a martingale.

Double-Expectation Formula: $E[X_n] = E[E[X_n \mid X_0, \ldots, X_{n-1}]] = E[X_{n-1}]$. Iterating, $E[X_n] = E[X_0]$ for all $n$. However, $E[X_n] = E[X_0]$ for all $n$ is not a sufficient condition for $X_n$ to be a martingale, and even for a martingale the identity need not extend to random times. For example:

Given a simple symmetric random walk with $X_0 = 0$, we have $E[X_{n+1} \mid X_0, \ldots, X_n] = X_n$, so the walk is a martingale. But let $T = \inf\{n : X_n = 1\}$. Since the walk is recurrent, $P(T < \infty) = 1$ and $X_T = 1$, so $E[X_T] = 1 \ne 0 = E[X_0]$.
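
A small simulation (a hedged sketch; the truncation horizon is our choice) makes the failure visible: the stopped value $X_{\min(T, n)}$ has mean $0$ for every fixed $n$, yet $X_T = 1$ whenever $T$ occurs, so $E[X_T] = 1$.

```python
# A hedged sketch (truncation horizon is our choice): symmetric walk with
# T = inf{n : X_n = 1}.  The stopped value X_{min(T, n)} has mean 0 for every
# fixed n, yet X_T = 1 whenever T occurs, so E[X_T] = 1 != 0 = E[X_0].
import random

random.seed(1)
HORIZON = 10_000   # fixed n at which unstopped paths are truncated
N_PATHS = 20_000

total, hits = 0, 0
for _ in range(N_PATHS):
    x = 0
    for _ in range(HORIZON):
        x += random.choice([-1, 1])
        if x == 1:      # T has occurred; the path is stopped here
            hits += 1
            break
    total += x          # x equals X_{min(T, HORIZON)} either way

print(f"mean of X_min(T, HORIZON) ≈ {total / N_PATHS:+.3f}")  # ≈ 0 = E[X_0]
print(f"P(T <= HORIZON) ≈ {hits / N_PATHS:.3f}")              # -> 1 as HORIZON grows
```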

Stopping Time

We now ask: if $T$ is a random time that may depend on the process itself, does $E[X_T] = E[X_0]$ still hold? To make this precise, we say a non-negative integer-valued random variable $T$ is a stopping time for $X_n$ if the event $\{T = n\}$ is determined by $X_0, X_1, \ldots, X_n$ (i.e., $I_{T = n}$ is a function of $X_0, X_1, \ldots, X_n$).

  • $P(T = \infty) > 0 \implies X_T$ is not always defined, so we always assume $P(T < \infty) = 1$.
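
For intuition, here is an illustrative sketch (function names are ours): a first hitting time can be decided from the past alone, while a "last visit" time needs the future, so only the former is a stopping time.

```python
# An illustrative sketch (function names are ours): {T = n} must be decidable
# from X_0, ..., X_n alone.
def first_hit(path, target):
    """T = inf{n : X_n = target}: a stopping time, since whether T = n
    is determined by X_0, ..., X_n only."""
    for n, x in enumerate(path):
        if x == target:
            return n
    return None  # corresponds to the event {T = infinity}

def last_visit(path, target):
    """L = max{n : X_n = target}: NOT a stopping time, since deciding
    whether L = n requires knowing the entire future of the path."""
    hits = [n for n, x in enumerate(path) if x == target]
    return hits[-1] if hits else None
```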

Optional Stopping Theorem

The random-walk example above shows that $T = \inf\{n : X_n = 1\}$ is a stopping time for $X_n$, yet $E[X_T] \ne E[X_0]$. What conditions on $T$ guarantee that $E[X_T] = E[X_0]$ still holds? The Optional Stopping Theorem answers this: if $X_n$ is a martingale and $T$ is a stopping time with $P(T < \infty) = 1$, $E[|X_T|] < \infty$, and $\lim_{n\to\infty} E[X_n I_{T > n}] = 0$, then $E[X_T] = E[X_0]$. In particular, these conditions hold automatically if $T$ is bounded, i.e. $\exists M < \infty$ with $P(T \le M) = 1$.

Also, if $X_n$ is a martingale and there exists $M < \infty$ such that $P(|X_n| I_{n \le T} \le M) = 1$ for all $n$ and $P(T < \infty) = 1$, then $E[X_T] = E[X_0]$.
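
A classic application is the gambler's ruin (a hedged sketch; the barriers $a$, $b$ are our choice). Stopping the symmetric walk at $T = \inf\{n : X_n = -a \text{ or } X_n = b\}$, we have $|X_n| I_{n \le T} \le \max(a, b)$ and $P(T < \infty) = 1$, so the second form above gives $E[X_T] = 0$; writing $p = P(X_T = b)$, $pb - (1-p)a = 0$, hence $p = a/(a+b)$.

```python
# A sketch of the classic gambler's ruin application (barriers a, b are our
# choice).  With T = inf{n : X_n = -a or X_n = b} we have
# |X_n| I_{n <= T} <= max(a, b) and P(T < infinity) = 1, so E[X_T] = 0.
# Writing p = P(X_T = b):  p*b - (1 - p)*a = 0, hence p = a / (a + b).
import random

random.seed(2)
a, b = 3, 7
N_PATHS = 100_000

hits_b = 0
for _ in range(N_PATHS):
    x = 0
    while -a < x < b:
        x += random.choice([-1, 1])
    hits_b += (x == b)

print(f"simulated P(X_T = b) ≈ {hits_b / N_PATHS:.4f}")
print(f"theory a / (a + b)  = {a / (a + b):.4f}")   # 0.3000
```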

Martingale Convergence Theorem

Any martingale $X_n$ such that either $X_n \ge c$ for all $n$ or $X_n \le c$ for all $n$, for some $c \in \mathbb{R}$, converges w.p. 1 to some random variable $X$.
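
Here is a minimal sketch (our own example, not from the notes): with $X_{n+1} = X_n U_{n+1}$, where $U$ equals $0.5$ or $1.5$ with probability $1/2$ each, $X_n \ge 0$ is a martingale with $E[X_n] = 1$, so the theorem guarantees convergence w.p. 1; in this case the a.s. limit is $0$ because $E[\log U] < 0$.

```python
# A sketch (our own example): X_{n+1} = X_n * U with U in {0.5, 1.5} equally
# likely, so E[U] = 1 and X_n >= 0 is a martingale with E[X_n] = 1.  Since
# E[log U] = 0.5 * log(0.75) < 0, the a.s. limit guaranteed by the theorem
# is 0, even though the expectation stays at 1 for every n.
import random

random.seed(3)

def product_martingale(n_steps):
    x = 1.0
    for _ in range(n_steps):
        x *= random.choice([0.5, 1.5])
    return x

samples = [product_martingale(200) for _ in range(10_000)]
frac_tiny = sum(x < 1e-3 for x in samples) / len(samples)
print(f"fraction of paths with X_200 < 0.001: {frac_tiny:.3f}")  # ≈ 1
```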

Branching Process

A branching process is a stochastic model for population growth. Consider a population of individuals. Let $X_n$ denote the number of individuals at time $n$.

We make two assumptions about the reproduction process:

  1. Each individual produces offspring according to the same probability distribution $\{p_k\}_{k \ge 0}$, where $p_k$ is the probability that an individual produces exactly $k$ offspring (so $\sum_{k=0}^{\infty} p_k = 1$).
  2. The individuals reproduce independently.

Furthermore, if $X_n = k$, denote by $Y_1, \ldots, Y_k$ the numbers of offspring of the $k$ individuals; these are independent random variables with common distribution $P(Y_i = j) = p_j$ and mean $\mu$. Then:

  • $p_{kj} = P(X_{n+1} = j \mid X_n = k) = P(Y_1 + \cdots + Y_k = j)$.
  • $E[X_{n+1} \mid X_n = k] = E[Y_1 + \cdots + Y_k] = k\mu$.
  • Further, $E[X_n] = \sum_{k=0}^{\infty} P(X_{n-1} = k)\, E[X_n \mid X_{n-1} = k] = \sum_{k=0}^{\infty} k\mu\, P(X_{n-1} = k) = \mu E[X_{n-1}] = \mu^n E[X_0]$.
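
The identity $E[X_n] = \mu^n E[X_0]$ is easy to check by simulation; here is a hedged sketch with an offspring law of our choosing ($p_0 = 0.25$, $p_1 = 0.25$, $p_2 = 0.5$, so $\mu = 1.25$).

```python
# A hedged sketch (the offspring law p_0 = 0.25, p_1 = 0.25, p_2 = 0.5 is our
# example): simulate X_{n+1} = Y_1 + ... + Y_{X_n} and compare the sample
# mean of X_n against mu^n * E[X_0] with X_0 = 1.
import random

random.seed(4)
OFFSPRING = [0, 1, 2]
WEIGHTS = [0.25, 0.25, 0.5]
MU = sum(k * w for k, w in zip(OFFSPRING, WEIGHTS))  # mu = 1.25

def next_generation(k):
    """Given X_n = k, draw X_{n+1} = Y_1 + ... + Y_k."""
    return sum(random.choices(OFFSPRING, WEIGHTS, k=k)) if k > 0 else 0

def simulate(n, x0=1):
    x = x0
    for _ in range(n):
        x = next_generation(x)
    return x

n = 10
sample_mean = sum(simulate(n) for _ in range(20_000)) / 20_000
print(f"sample E[X_{n}] ≈ {sample_mean:.2f}   vs   mu^{n} = {MU ** n:.2f}")
```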

From the above, the long-run behaviour of $X_n$ depends on $\mu$. If $\mu = 1$, then $E[X_{n+1} \mid X_n] = X_n$, so $X_n$ is a martingale and $E[X_n] = E[X_0]$. If $\mu < 1$, then $\lim_{n\to \infty} E[X_n] = \lim_{n\to \infty} \mu^n E[X_0] = 0$; since $X_n$ is a non-negative integer, $P(X_n \ge 1) \le E[X_n]$ by Markov's inequality, so $\lim_{n\to\infty} P(X_n = 0) = 1$ and the population dies out. If $\mu > 1$ (the flourishing case), the mean alone does not tell us whether the population survives.

Let $a_n(k) = P(X_n = 0 \mid X_0 = k)$ be the probability that the population has died out by time $n$, given that it started with $k$ individuals, and let $a(k) = \lim_{n\to\infty} a_n(k)$. Since the population dies out if and only if all $k$ initial branches die out, and the branches evolve independently, $a(k) = [a(1)]^k$. We write $a = a(1)$ and call it the extinction probability.

Conditioning on the first generation, $a = P(\text{population dies out} \mid X_0 = 1) = \sum_{k=0}^{\infty} P(X_1 = k \mid X_0 = 1)\, P(\text{population dies out} \mid X_1 = k) = \sum_{k=0}^{\infty} p_k a(k) = \sum_{k=0}^{\infty} p_k a^k$. In terms of the probability generating function (pgf) $\phi(s) = \sum_{k=0}^{\infty} p_k s^k$, this says $a = \phi(a)$.

In more detail, let $\phi_n$ denote the pgf of $X_n$ (with $X_0 = 1$); one can show that $\phi_n(s) = \phi(\phi_{n-1}(s))$. Hence $a_n(1) = P(X_n = 0) = \phi_n(0)$, and $a$ is the smallest root of the equation $a = \phi(a)$ in $[0, 1]$ (note that $a = 1$ always satisfies this equation, since $\phi(1) = 1$). The pgf is increasing on $[0, 1]$. Thus, we can conclude the following theorem:

If $\mu \le 1$ and $p_0 > 0$, then $a = 1$. If $\mu > 1$, then $a < 1$ and $a$ equals the unique root of the equation $a = \phi(a)$ with $0 \le a < 1$.
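
The theorem can be checked numerically (a sketch; the Poisson offspring law is our example). With Poisson($\lambda$) offspring, $\phi(s) = e^{\lambda(s-1)}$ and $\mu = \lambda$; iterating $a_{n+1} = \phi(a_n)$ from $a_0 = 0$ mirrors $a_n(1) = \phi_n(0)$ and converges to the extinction probability $a$.

```python
# A sketch (the Poisson offspring law is our example): with Poisson(lam)
# offspring, phi(s) = exp(lam * (s - 1)) and mu = lam.  Iterating
# a_{n+1} = phi(a_n) from a_0 = 0 mirrors a_n(1) = phi_n(0) and converges
# to the extinction probability a.
import math

def extinction_probability(lam, iterations=10_000):
    a = 0.0
    for _ in range(iterations):
        a = math.exp(lam * (a - 1.0))  # a_{n+1} = phi(a_n)
    return a

for lam in [0.8, 1.0, 1.5, 2.0]:
    print(f"mu = {lam:.1f}:  a ≈ {extinction_probability(lam):.4f}")
# mu <= 1 gives a = 1 (convergence is slow in the critical case mu = 1);
# mu > 1 gives the unique root of a = phi(a) in [0, 1).
```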