Markov chain theorems are concerned with what happens in the long run.
A (discrete-time, discrete-space, time-homogeneous) Markov chain is specified by the following three ingredients (time-homogeneous means the transition probabilities do not change over time, i.e. they do not depend on n):
- A state space S, any non-empty finite or countable set
- Initial probabilities {νi}i∈S, where νi is the probability of starting at state i at time 0. Thus νi≥0 and ∑iνi=1.
- Transition probabilities {pij}i,j∈S, where pij is the probability of jumping from state i to state j. Thus pij≥0 and ∑jpij=1 for all i.
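The three ingredients can be sketched in plain Python. The two-state chain below (states, initial weights, and transition probabilities) is entirely made up for illustration; the assertions check the constraints νi≥0, ∑iνi=1 and ∑jpij=1 listed above.

```python
import random

# Hypothetical two-state chain (all numbers are made up for illustration).
S = [0, 1]                      # state space S

nu = [0.6, 0.4]                 # initial probabilities nu_i

P = [[0.9, 0.1],                # transition probabilities p_ij:
     [0.5, 0.5]]                # row i is the distribution of the next state from i

# Check the constraints from the definition.
assert all(v >= 0 for v in nu) and abs(sum(nu) - 1.0) < 1e-12
for row in P:
    assert all(p >= 0 for p in row) and abs(sum(row) - 1.0) < 1e-12

def simulate(n_steps, rng=random.Random(0)):
    """Sample a trajectory X_0, X_1, ..., X_{n_steps}."""
    x = rng.choices(S, weights=nu)[0]        # draw X_0 ~ nu
    path = [x]
    for _ in range(n_steps):
        x = rng.choices(S, weights=P[x])[0]  # draw X_{n+1} from row x of P
        path.append(x)
    return path

print(simulate(5))
```

Each step only reads row `P[x]` for the current state `x`, which is exactly the Markov property described below.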
We can write a Markov chain as a sequence of random variables X0,X1,X2,…, where X0 is the initial state and Xn is the state at time n.
- Initial Probabilities: νi=P(X0=i)∀i∈S
- The transition probabilities pij are conditional probabilities (conditional on the current state): if P(Xn=i)>0, then P(Xn+1=j∣Xn=i)=pij. Note that pij does not depend on n (again the time-homogeneous property).
- P(Xn+1=j∣Xn=in,Xn−1=in−1,…,X1=i1,X0=i0)=P(Xn+1=j∣Xn=in)=pinj for all n≥0 and j,in,in−1,…,i1,i0∈S (Markov Property). This says that the probability of the next state depends only on the current state, not on the past states; in particular, pij=P(Xn+1=j∣Xn=i).
- P(Xn=in,Xn−1=in−1,…,X1=i1,X0=i0)=νi0pi0i1pi1i2…pin−1in
- Denote μi(n)=P(Xn=i); then μi(n)=∑jμj(n−1)pji for all n≥1 and i∈S. μi(n) is the probability of being in state i after n steps. (e.g. P(X1=1)=μ1(1)=∑jνjpj1)
- μ(n)=νPn=μ(0)Pn, where μ(n) is the row vector (μi(n))i∈S, ν=μ(0), and P=(pij) is the transition matrix
- pij(n)=P(Xn=j∣X0=i)=P(Xm+n=j∣Xm=i)=Pi(Xn=j)
- e.g. pii(2)=∑kpikpki, and more generally pii(n)=∑kpik(n−1)pki; similarly pij(n)=∑kpik(n−1)pkj
- Pi(⋯)=P(⋯∣X0=i)
- Ei(⋯)=E(⋯∣X0=i)
- pij(n) basically means the probability of being at state j after n steps, given that you start at state i.
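The recursion μi(n)=∑jμj(n−1)pji above can be sketched directly in Python. The two-state chain here is a made-up example, not anything from the text:

```python
# A made-up two-state chain for illustration.
nu = [0.6, 0.4]                 # initial distribution, mu(0) = nu
P = [[0.9, 0.1],
     [0.5, 0.5]]                # transition matrix

def step(mu, P):
    """One application of mu_i(n) = sum_j mu_j(n-1) * p_ji (row vector times P)."""
    n = len(P)
    return [sum(mu[j] * P[j][i] for j in range(n)) for i in range(n)]

def mu_n(nu, P, n):
    """Distribution at time n: mu(n) = nu P^n."""
    mu = list(nu)
    for _ in range(n):
        mu = step(mu, P)
    return mu

# mu_1(1) = sum_j nu_j p_j1 = 0.6*0.1 + 0.4*0.5 = 0.26,
# matching the example P(X_1 = 1) = sum_j nu_j p_j1 above.
print(mu_n(nu, P, 1))
```

Note the multiplication is vector-times-matrix (summing over the source state j), matching μ(n)=νPn with ν as a row vector.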
The best way to get pij(n) is to compute Pn, the nth power of the transition matrix P: pij(n) is the (i,j) entry of Pn.
CHAPMAN-KOLMOGOROV EQUATION: pij(n+m)=∑kpik(n)pkj(m)
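Since pij(n) is the (i,j) entry of Pn, the Chapman-Kolmogorov equation is just the matrix identity P^(n+m) = P^n P^m. A numerical check on a made-up two-state chain (pure Python, no libraries assumed):

```python
# Numerically check p_ij(n+m) = sum_k p_ik(n) p_kj(m) on a made-up chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def matmul(A, B):
    """Plain matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(P, n):
    """P^n; its (i, j) entry is the n-step probability p_ij(n)."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        result = matmul(result, P)
    return result

n, m = 3, 4
lhs = matpow(P, n + m)                      # p_ij(n+m)
rhs = matmul(matpow(P, n), matpow(P, m))    # sum_k p_ik(n) p_kj(m)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("Chapman-Kolmogorov holds for n=3, m=4")
```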
CHAPMAN-KOLMOGOROV INEQUALITY: pij(n+m)≥pik(n)pkj(m) for any fixed k∈S (keep just one term of the sum in the equation above)