Review of Convergence of Random Sequences

Definitions, Propositions and Theorems

Characteristic Function

The characteristic function of a random variable $X$ is
$$\phi_X(t)=E[e^{itX}]=E[\cos(tX)+i\sin(tX)].$$

Fourier Inversion Formula

If $\phi_X$ is absolutely integrable, then $X$ has a density and
$$F'(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-itx}\phi_X(t)\,dt.$$
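As a quick numerical sketch of the inversion formula (standard-library Python only; the standard normal with $\phi(t)=e^{-t^2/2}$ is my choice of test case, not taken from the text), we can integrate the right-hand side and compare against the known density:

```python
import math

# Numerically invert the characteristic function of the standard normal,
# phi(t) = exp(-t^2/2), and compare with the known density
# f(x) = exp(-x^2/2) / sqrt(2*pi).

def density_from_cf(x, t_max=10.0, n_steps=20000):
    """Midpoint-rule approximation of (1/2pi) * int e^{-itx} phi(t) dt.

    phi is real and even here, so the imaginary (sine) part of the
    integrand cancels and only the cosine term contributes.
    """
    dt = 2 * t_max / n_steps
    total = 0.0
    for k in range(n_steps):
        t = -t_max + (k + 0.5) * dt
        total += math.cos(t * x) * math.exp(-t * t / 2) * dt
    return total / (2 * math.pi)

for x in (0.0, 1.0, 2.0):
    exact = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    print(f"x={x}: inversion={density_from_cf(x):.6f}, exact={exact:.6f}")
```

Truncating the integral at $|t|\leq 10$ is harmless here because the Gaussian characteristic function is already negligible there.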

Lévy-Cramér continuity theorem

Let $\{X_n\}$ be a random sequence with distribution functions $\{F_n\}$, and let $X$ be a random variable with distribution function $F$. The following are equivalent:

  1. $X_n$ converges weakly to $X$.
  2. $F_n(x)\to F(x)$ at every continuity point $x\in\mathbb{R}$ of $F$.
  3. $E[f(X_n)]\to E[f(X)]$ for every bounded continuous function $f$ (equivalently, every bounded Lipschitz function).
  4. $\phi_{X_n}(t)$ converges pointwise to $\phi_X(t)$ for every $t\in\mathbb{R}$.

Theorem 1

If a function $\phi(t)$ is continuous at $0$ and is the pointwise limit of the characteristic functions of a random sequence, then $\phi(t)$ must be the characteristic function of some probability measure.

Markov Inequality

For a non-negative random variable $X$ and any $t>0$, we have
$$pr(X\geq t)\leq\frac{E[X]}{t}.$$
An immediate corollary is
$$pr(X\geq t)\leq\frac{E[X^p]}{t^p}.$$

Proof:
$$\begin{aligned} E[X]&=\int_{0}^\infty x\,dF(x)\\ &\geq \int_{t}^\infty x\,dF(x)\\ &\geq t\cdot pr(X\geq t). \end{aligned}$$
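A Monte Carlo sanity check of the inequality (my own illustration; the Exponential(1) distribution, with $E[X]=1$, is an arbitrary choice):

```python
import random

# Check Markov's inequality pr(X >= t) <= E[X]/t empirically
# for a non-negative variable; here X ~ Exponential(1), so E[X] = 1.
random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]
mean = sum(samples) / n

for t in (1.0, 2.0, 4.0):
    tail = sum(1 for x in samples if x >= t) / n
    bound = mean / t
    print(f"t={t}: pr(X>=t) ~ {tail:.4f} <= E[X]/t ~ {bound:.4f}")
```

For the exponential the true tail is $e^{-t}$, so the Markov bound $1/t$ is quite loose; that is expected, since the bound uses only the mean.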

$L_p$ Convergence implies Convergence in prob.

A direct consequence of the Markov inequality: if $E[|X_n-X|^p]\to 0$, then for every $\epsilon>0$,
$$pr(|X_n-X|\geq\epsilon)\leq\frac{E[|X_n-X|^p]}{\epsilon^p}\to 0.$$

Weak Law of Large Numbers (WLLN)

Version 1:
The sample mean of uncorrelated random variables with common finite mean $\mu$ and uniformly bounded variances converges to $\mu$ in $L_2$.

Version 2:
The sample mean of i.i.d. random variables with finite mean $\mu$ converges to $\mu$ in probability.

Proof:
$\phi_{S_n/n}(t)=\phi_X^n(t/n)$. A Taylor expansion gives
$$\phi_X\left(\frac{t}{n}\right)=1+\frac{i\mu t}{n}+o\left(\frac{1}{n}\right).$$
Since $\left(1+\frac{i\mu t}{n}+o\left(\frac{1}{n}\right)\right)^n\to e^{i\mu t}$, we have $\phi_{S_n/n}(t)\to e^{i\mu t}$, the characteristic function of the constant $\mu$. By the continuity theorem, $S_n/n$ converges weakly to $\mu$, and weak convergence to a constant implies convergence in probability.
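The WLLN is easy to see numerically (a minimal sketch with Uniform(0,1) draws, $\mu=1/2$, chosen by me as an example):

```python
import random

# WLLN sketch: running averages of i.i.d. Uniform(0,1) draws
# (mu = 0.5) concentrate around 0.5 as n grows.
random.seed(1)

def sample_mean(n):
    """Average of n fresh Uniform(0,1) draws, i.e. S_n / n."""
    return sum(random.random() for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(f"n={n}: S_n/n = {sample_mean(n):.4f}")
```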

Borel-Cantelli Lemma

If $\sum_{n=1}^\infty pr(A_n)<\infty$, we have
$$pr(A_n\ \mathrm{i.o.})=pr(\limsup_n A_n)=0.$$
Conversely, if $\sum_{n=1}^\infty pr(A_n)=\infty$ and the $A_n$ are independent, we have
$$pr(A_n\ \mathrm{i.o.})=1.$$
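Both halves of the lemma can be illustrated by simulation (my own example, assuming independent Uniform(0,1) draws $U_n$): with $A_n=\{U_n\leq 1/n^2\}$ the probabilities are summable, so only finitely many events occur; with $B_n=\{U_n\leq 1/n\}$ the harmonic series diverges, so events keep occurring, at a rate like $\log N$.

```python
import random

# Borel-Cantelli illustration with independent U_n ~ Uniform(0,1).
# Summable case:  A_n = {U_n <= 1/n^2}, sum pr(A_n) = pi^2/6 < inf,
#                 so only finitely many A_n occur a.s.
# Divergent case: B_n = {U_n <= 1/n},  sum pr(B_n) = inf,
#                 so B_n occurs infinitely often a.s.
random.seed(2)
N = 200_000
count_summable = sum(1 for n in range(1, N + 1) if random.random() <= 1 / n**2)
count_divergent = sum(1 for n in range(1, N + 1) if random.random() <= 1 / n)
print("occurrences, summable case:", count_summable)    # stays small
print("occurrences, divergent case:", count_divergent)  # grows like log N
```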

Proposition 2

$X_n\to X$ in probability iff every subsequence of $X_n$ has a further subsequence that converges a.s. to $X$.

Strong Law of Large Numbers (SLLN)

The sample mean of i.i.d. integrable random variables converges a.s. to the mean.

Kolmogorov’s Inequality

An improvement of Chebyshev's inequality: for independent $X_1,\dots,X_n$ with mean $0$ and variances $\sigma_k^2$,
$$pr\left(\sup_{1\leq k\leq n}|S_k|\geq\varepsilon\right)\leq\frac{\sum_{k=1}^n\sigma_k^2}{\varepsilon^2}.$$
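A Monte Carlo check of the inequality for the simple random walk (my own example: i.i.d. steps $X_i=\pm 1$ with mean $0$ and variance $1$, so the bound is $n/\varepsilon^2$):

```python
import random

# Monte Carlo check of Kolmogorov's inequality for S_k = X_1 + ... + X_k
# with i.i.d. X_i = +-1 (mean 0, variance 1):
#   pr(sup_{k<=n} |S_k| >= eps) <= n / eps^2.
random.seed(3)
n, eps, trials = 100, 25.0, 20_000
hits = 0
for _ in range(trials):
    s, peak = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        peak = max(peak, abs(s))
    if peak >= eps:
        hits += 1
empirical = hits / trials
bound = n / eps**2  # sum of variances over eps^2 = 0.16
print(f"empirical = {empirical:.4f} <= bound = {bound:.4f}")
```

The empirical running-maximum probability is far below the bound here; Kolmogorov's inequality trades tightness for controlling the whole path, not just the endpoint.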

Levy’s Theorem

If the $X_n$ are independent, then the following are equivalent: (i) $S_n$ converges weakly; (ii) $S_n$ converges in probability; (iii) $S_n$ converges a.s.

Kolmogorov’s one series Theorem

If the $X_n$ are independent random variables with mean $0$ and $\sum_{n=1}^\infty var(X_n)<\infty$, then $S_\infty=\sum_{n=1}^\infty X_n$ converges a.s.

Kolmogorov’s three-series theorem

Let $X_n$ be independent and let $Y_n=X_n 1_{|X_n|\leq A}$ with $A>0$. Then $S_\infty=\sum_{n=1}^\infty X_n$ converges a.s. iff

  1. $\sum_{n=1}^\infty pr(|X_n|>A)<\infty$;
  2. $\sum_{n=1}^\infty E[Y_n]$ converges;
  3. $\sum_{n=1}^\infty var(Y_n)<\infty$.

Kolmogorov’s 0-1 law

Let $\mathcal{F}'_n=\sigma\{X_n,X_{n+1},\cdots\}$ and let $\mathcal{T}=\cap_n\mathcal{F}'_n$ be the tail $\sigma$-field. If the $X_n$ are independent and $A\in\mathcal{T}$, then $pr(A)=0$ or $1$.

Kronecker’s lemma

If $a_n\uparrow\infty$ and $\sum_{i=1}^\infty\frac{x_i}{a_i}$ converges, then $\frac{1}{a_n}\sum_{i=1}^n x_i\to 0$.

Central Limit Theorem (CLT)

For i.i.d. random variables with mean $\mu$ and $var(X_i)=\sigma^2<\infty$, $\frac{S_n-n\mu}{\sqrt{n}\,\sigma}$ converges weakly to the standard normal distribution.
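A simulation sketch of the CLT (my own example: Uniform(0,1) summands, so $\mu=1/2$, $\sigma^2=1/12$), checking one point of the limiting distribution, $\Phi(1)\approx 0.8413$:

```python
import math
import random

# CLT sketch: standardized sums of i.i.d. Uniform(0,1) draws
# (mu = 1/2, sigma^2 = 1/12) should be approximately N(0,1);
# we compare the empirical pr(Z_n <= 1) against Phi(1).
random.seed(4)
n, trials = 200, 10_000
mu, sigma = 0.5, math.sqrt(1 / 12)

def standardized_sum():
    """One draw of Z_n = (S_n - n*mu) / (sqrt(n)*sigma)."""
    s = sum(random.random() for _ in range(n))
    return (s - n * mu) / (math.sqrt(n) * sigma)

frac = sum(1 for _ in range(trials) if standardized_sum() <= 1.0) / trials
phi1 = 0.5 * (1 + math.erf(1 / math.sqrt(2)))  # Phi(1), standard normal CDF at 1
print(f"empirical pr(Z_n <= 1) = {frac:.4f}, Phi(1) = {phi1:.4f}")
```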

Lindeberg’s Theorem

Let the $X_n$ be independent with mean $0$, let $F_i$ be the distribution of $X_i$, and set $s_n^2=\sum_{i=1}^n var(X_i)$. The Lindeberg condition
$$\lim_n\frac{1}{s_n^2}\sum_{i=1}^n\int_{|x|\geq\epsilon s_n}x^2\,dF_i(x)=0\quad\text{for each }\epsilon>0$$
is sufficient for the CLT to hold, i.e., $S_n/s_n$ converges weakly to the standard normal distribution.
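The condition can be evaluated in closed form for simple summands. A sketch (my own example: $X_i=\pm 1$ with equal probability, so $s_n^2=n$ and each truncated integral is either $1$ or $0$ depending on whether $\epsilon s_n\leq 1$):

```python
import math

# Lindeberg condition for X_i = +-1 (mean 0, var 1): s_n^2 = n, and since
# |X_i| = 1, the integral over {|x| >= eps*s_n} of x^2 dF_i equals 1 when
# eps*sqrt(n) <= 1 and 0 otherwise. So the Lindeberg sum vanishes for
# every fixed eps > 0 once n is large enough.
def lindeberg_sum(n, eps):
    s_n = math.sqrt(n)
    per_term = 1.0 if eps * s_n <= 1.0 else 0.0  # truncated second moment
    return sum(per_term for _ in range(n)) / n   # (1/s_n^2) * sum of integrals

for n in (1, 10, 1_000):
    print(f"n={n}: Lindeberg sum at eps=0.5 -> {lindeberg_sum(n, 0.5)}")
```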

Reference

Durrett, Rick. Probability: Theory and Examples. Vol. 49. Cambridge University Press, 2019.
Varadhan, S. R. S. Probability Theory. Courant Lecture Notes, 2000.
