Convergence of Random Variables
From analysis courses we learn about the convergence of sequences, the convergence of series, etc. Probability, as a branch of mathematics, is also a branch of analysis, so we can likewise talk about convergence for random variables.
Convergence in Probability
Let X1, X2, … be an infinite sequence of random variables and let Y be another random variable. The sequence {Xn} converges in probability to Y if ∀ϵ > 0, lim_{n→∞} P(|Xn − Y| > ϵ) = 0. We denote this Xn →P Y.
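As a minimal Monte Carlo sketch of this definition (not from the notes; the choice Xn = Y + Zn with Zn ∼ N(0, 1/n) is an assumed toy example), we can estimate P(|Xn − Y| > ϵ) for growing n and watch it shrink:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
m = 100_000  # Monte Carlo samples per n

# Assumed toy example: X_n = Y + Z_n with Z_n ~ N(0, 1/n), so
# |X_n - Y| = |Z_n| and P(|X_n - Y| > eps) -> 0 as n -> infinity.
for n in [1, 10, 100, 1000]:
    z = rng.normal(0.0, 1.0 / np.sqrt(n), size=m)  # Z_n ~ N(0, 1/n)
    p = np.mean(np.abs(z) > eps)                   # estimate of P(|X_n - Y| > eps)
    print(n, p)
```

For n = 1 the estimated probability is close to P(|N(0,1)| > 0.1) ≈ 0.92, while by n = 1000 it is essentially zero, matching Xn →P Y.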
Weak Law of Large Numbers: Let X1, X2, … be a sequence of independent random variables, each having the same mean μ and with variance Var[Xi] ≤ v for some v < ∞. Then ∀ϵ > 0, lim_{n→∞} P(|Mn − μ| > ϵ) = 0, i.e. Mn →P μ, where Mn = (1/n) ∑_{i=1}^n Xi.
- This can be proved by Chebyshev's inequality: P(|Mn − μ| ≥ ϵ) ≤ Var[Mn]/ϵ² ≤ v/(nϵ²), and lim_{n→∞} v/(nϵ²) = 0,
- where E[Mn] = μ and Var[Mn] = (1/n²) ∑_i Var[Xi] ≤ nv/n² = v/n.
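The Chebyshev bound above can be compared against simulation. The sketch below (assumed example, not from the notes) uses Xi ∼ Exponential(1), so μ = 1 and Var[Xi] = 1 = v:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.05
reps = 20_000  # independent copies of M_n for each n

# WLLN sketch for X_i ~ Exponential(1): mu = 1, Var[X_i] = 1 = v.
for n in [10, 100, 1000]:
    x = rng.exponential(1.0, size=(reps, n))
    m_n = x.mean(axis=1)                      # M_n = (1/n) * sum_i X_i
    p_emp = np.mean(np.abs(m_n - 1.0) > eps)  # empirical P(|M_n - mu| > eps)
    p_cheb = min(1.0 / (n * eps**2), 1.0)     # Chebyshev bound v/(n*eps^2), capped at 1
    print(n, p_emp, p_cheb)
```

The empirical probability sits below the (rather loose) Chebyshev bound and both tend to 0 as n grows.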
Almost Sure Convergence
Let X1, X2, … be a sequence of random variables and let X be another random variable. The sequence {Xn} converges almost surely to X if P({s : lim_{n→∞} Xn(s) = X(s)}) = 1. We denote this Xn →a.s. X. This is also called convergence with probability 1.
- P(lim_{n→∞} Xn = X) = 1 ⟹ ∀ϵ > 0, P(|Xn − X| > ϵ i.o.) = 0
- Xn →a.s. X ⟹ Xn →P X, but the converse does not hold
- Almost sure convergence does not say lim_{n→∞} E[Xn] = E[X]; it only gives E[lim_{n→∞} Xn] = E[X].
E.g. S = [0,1], U ∼ Uniform(0,1); let Xn(s) = I(U > 1/n²). Then Xn →a.s. 1 by the Borel–Cantelli lemma.
- Borel–Cantelli lemma: if for every ϵ > 0, ∑_{n=1}^∞ P(|Xn − X| > ϵ) < ∞, then Xn →a.s. X. Take X = 1 and 0 < ϵ < 1. Since Xn ≤ 1, |Xn − 1| > ϵ can only happen when 1 − Xn > ϵ, i.e. Xn = 0. So ∑_{n=1}^∞ P(|Xn − 1| > ϵ) = ∑_{n=1}^∞ P(Xn = 0) = ∑_{n=1}^∞ P(U ≤ 1/n²) = ∑_{n=1}^∞ 1/n² = π²/6 < ∞, hence Xn →a.s. 1 by the Borel–Cantelli lemma.
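The two ingredients of this example can be checked numerically. The sketch below (assumed illustration, not from the notes) verifies the partial sums of ∑ 1/n² approach π²/6, and shows that for any sampled U = u, Xn(u) = 0 only for the finitely many n with n ≤ 1/√u, after which Xn(u) = 1 forever:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000

# P(X_n = 0) = P(U <= 1/n^2) = 1/n^2, and the series sums to pi^2/6 < inf.
partial = sum(1.0 / n**2 for n in range(1, N + 1))
print(partial, np.pi**2 / 6)  # partial sum approaches pi^2/6

# For a fixed sample U = u, X_n(u) = I(u > 1/n^2) equals 0 exactly when
# n <= 1/sqrt(u); the largest such n is finite, so X_n(u) -> 1.
u = rng.uniform(0, 1, size=10)
last_zero = np.floor(1.0 / np.sqrt(u))  # largest n with X_n(u) = 0
print(last_zero)
```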
Strong Law of Large Numbers: Let X1, X2, … be a sequence of independent random variables, each having the same mean μ and variance Var[Xi] ≤ v < ∞. Then P(lim_{n→∞} Mn = μ) = 1, i.e. Mn →a.s. μ.
BOUNDED CONVERGENCE THEOREM: If Xn →a.s. X and the Xn are uniformly bounded (∃M < ∞ with |Xn| ≤ M for all n), then lim_{n→∞} E[Xn] = E[X].
MONOTONE CONVERGENCE THEOREM: If Xn →a.s. X and 0 ≤ X1 ≤ X2 ≤ …, then lim_{n→∞} E[Xn] = E[X].
DOMINATED CONVERGENCE THEOREM: If Xn →a.s. X and there is another random variable Y with E[|Y|] < ∞ and |Xn| ≤ Y for all n, then lim_{n→∞} E[Xn] = E[X].
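These hypotheses matter. A standard counterexample (assumed here, not from the notes): with U ∼ Uniform(0,1), take Xn = n · I(U ≤ 1/n). Then Xn →a.s. 0, yet E[Xn] = n · (1/n) = 1 for every n, so lim E[Xn] = 1 ≠ 0 = E[lim Xn]; no integrable Y dominates all the Xn, so none of the three theorems applies. A quick simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 1_000_000

# Counterexample: X_n = n * I(U <= 1/n). E[X_n] = 1 for all n, but
# P(X_n != 0) = 1/n -> 0, so X_n -> 0 almost surely.
u = rng.uniform(0, 1, size=m)
for n in [10, 100, 1000]:
    x_n = n * (u <= 1.0 / n)
    print(n, x_n.mean(), np.mean(x_n != 0))  # sample mean stays near 1; P(X_n != 0) shrinks
```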
Convergence in Distribution
Let X1, X2, … be a sequence of random variables and let X be another random variable. The sequence {Xn} converges in distribution to X if lim_{n→∞} P(Xn ≤ x) = P(X ≤ x) for every x ∈ R with P(X = x) = 0, i.e. at every continuity point of the CDF of X. We denote this Xn →D X.
- Xn →P X ⟹ Xn →D X (converse is false)
- Xn →a.s. X ⟹ Xn →D X (converse is false)
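The continuity-point restriction in the definition can be seen in a toy example (assumed, not from the notes): let Xn = 1/n be constant and X = 0. Then F_{Xn}(0) = 0 for all n while F_X(0) = 1, so the CDFs fail to converge at x = 0; but P(X = 0) = 1 ≠ 0 excludes that point, and at every other x the CDFs do converge, so Xn →D 0 still holds:

```python
# CDF of the constant random variable X_n = 1/n.
def cdf_xn(x, n):
    return float(x >= 1.0 / n)

# At the discontinuity point x = 0 of F_X, F_{X_n}(0) stays 0 for all n,
# while at the continuity point x = 0.5, F_{X_n}(0.5) -> 1 = F_X(0.5).
print([cdf_xn(0.0, n) for n in [1, 10, 100]])  # -> [0.0, 0.0, 0.0]
print([cdf_xn(0.5, n) for n in [1, 10, 100]])  # -> [0.0, 1.0, 1.0]
```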
Central Limit Theorem (CLT): Let X1,X2