Chapter 9. Integration with Respect to a Probability Measure
First-semester graduate course in Statistics at Nanjing Audit University: Advanced Probability Theory.
Source files are available on my GitHub: https://github.com/Berry-Wen/statistics-note-system
Background
Let $(\Omega,\mathcal{A},P)$ be a probability space.
We want to define the expectation, or what is equivalent, the "integral", of a general r.v.
We have of course already done this for r.v.s on a countable space $\Omega$.
The general case (for arbitrary $\Omega$) is more delicate.
Definition 9.1
- A r.v. $X$ is called simple if it takes on only a finite number of values and hence can be written in the form
$$X = \sum_{i=1}^{n} a_i I_{A_i} \tag{1}$$
where $a_i \in \mathbb{R}$ and $A_i \in \mathcal{A}$, $1 \le i \le n$.
Equivalently: if $A_k \in \mathcal{F}$, $k=1,2,\dots,n$ are pairwise disjoint with $\cup_{k=1}^n A_k = \Omega$, and $a_k \in \bar{\mathbb{R}}$, $k=1,2,\dots,n$, then the function
$$X(w) = \sum_{k=1}^{n} a_k I_{A_k}(w), \quad w \in \Omega$$
is called a simple function on $(\Omega,\mathcal{F})$.
- Such an $X$ is clearly measurable. (Why?) Let us check why $X$ is measurable: for any $B \in \mathcal{B}$,
$$\begin{aligned} X^{-1}(B) &= \{w: X(w) \in B\} && \text{definition of the preimage} \\ &= \cup_{a_k \in B} \{w: X(w) = a_k\} && \text{definition of a simple function} \\ &= \cup_{a_k \in B} A_k && \text{definition of a simple function} \\ &\in \mathcal{F} && A_k \in \mathcal{F} \Rightarrow \cup_{a_k \in B} A_k \in \mathcal{F} \end{aligned}$$
By Theorem 8.1, $X$ is measurable.
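A simple r.v. is easy to realize concretely. The sketch below (a toy six-point $\Omega$ and a toy partition; all names and values are illustrative assumptions, not from the text) evaluates $X = \sum_i a_i I_{A_i}$ pointwise:

```python
# A toy sample space and a partition of it (illustrative assumptions).
omega = [0, 1, 2, 3, 4, 5]
partition = [({0, 1}, 2.0), ({2, 3, 4}, -1.0), ({5}, 7.5)]  # pairs (A_i, a_i)

def indicator(A, w):
    """I_A(w): 1 if w is in A, else 0."""
    return 1 if w in A else 0

def X(w):
    """The simple r.v. X = sum_i a_i * I_{A_i}(w)."""
    return sum(a * indicator(A, w) for A, a in partition)

# X takes only finitely many values, as the definition requires.
assert [X(w) for w in omega] == [2.0, 2.0, -1.0, -1.0, -1.0, 7.5]
assert set(X(w) for w in omega) == {2.0, -1.0, 7.5}
```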
- Conversely, if $X$ is measurable and takes on the values $a_1,\dots,a_n$, it must have the representation (1) with $A_i = \{X = a_i\}$;
- A simple r.v. has of course many different representations of the form (1).
- If $X$ is simple, its expectation (or "integral" with respect to $P$) is the number
$$E\{X\} = \sum_{i=1}^{n} a_i P(A_i) \tag{2}$$
This is also written $\int X(w) P(dw)$ and even more simply $\int X \, dP$;
- A little algebra shows that $E\{X\}$ does not depend on the particular representation (1) chosen for $X$.
Exercise
Let $(\Omega,\mathcal{A},P)$ be a probability space.
Let $X:\Omega \to \mathbb{R}$ be such that it admits two representations
$$X = \sum_{i=1}^{n} a_i I_{A_i} \quad \text{and} \quad X = \sum_{j=1}^{m} b_j I_{B_j}$$
where $a_i, b_j \in \mathbb{R}$ and $A_i, B_j \in \mathcal{A}$ for all $i,j$. Show that
$$\sum_{i=1}^{n} a_i P(A_i) = \sum_{j=1}^{m} b_j P(B_j)$$
First, prove that $\cup_{i=1}^{n}A_i = \cup_{j=1}^{m}B_j$.
Assume
$$a_i \neq 0, \quad A_i \cap A_j = \emptyset \quad (i \neq j); \qquad b_i \neq 0, \quad B_i \cap B_j = \emptyset \quad (i \neq j).$$
For any $w \in \cup_{i=1}^{n} A_i$ there exists $i_0 \in \{1,2,\dots,n\}$ such that $X(w) = a_{i_0} \neq 0$ (otherwise $X(w) = 0$), so $w \in \cup_{j=1}^{m} B_j$; hence $\cup_{i=1}^{n} A_i \subset \cup_{j=1}^{m} B_j$. Symmetrically, for any $w \in \cup_{j=1}^{m} B_j$ there exists $j_0 \in \{1,2,\dots,m\}$ such that $X(w) = b_{j_0} \neq 0$, so $w \in \cup_{i=1}^{n} A_i$; hence $\cup_{j=1}^{m} B_j \subset \cup_{i=1}^{n} A_i$. Therefore
$$\cup_{i=1}^{n} A_i = \cup_{j=1}^{m} B_j.$$
Second, if $A_i B_j \neq \emptyset$, then for $w \in A_i B_j$, $X(w) = a_i = b_j$:
$$\begin{aligned} X &= \sum_{i=1}^{n} a_i I_{A_i} = \sum_{i=1}^{n} a_i I_{A_i \cap ( \cup_{i=1}^n A_i)} = \sum_{i=1}^{n} a_i I_{A_i \cap ( \cup_{j=1}^m B_j)} = \sum_{i=1}^{n} a_i I_{ \cup_{j=1}^m A_i B_j} = \sum_{i=1}^{n} \sum_{j=1}^{m} a_i I_{A_i B_j}, \\ X &= \sum_{j=1}^{m} b_j I_{B_j} = \sum_{j=1}^{m} b_j I_{B_j \cap ( \cup_{j=1}^m B_j)} = \sum_{j=1}^{m} b_j I_{B_j \cap ( \cup_{i=1}^n A_i)} = \sum_{j=1}^{m} b_j I_{ \cup_{i=1}^n A_i B_j} = \sum_{i=1}^{n} \sum_{j=1}^{m} b_j I_{A_i B_j}. \end{aligned}$$
Hence for $w \in A_i B_j \neq \emptyset$, $X(w) = a_i = b_j$.
If $A_i B_j = \emptyset$, then $a_i P(A_i B_j) = b_j P(A_i B_j) = 0$, which does not affect the computation.
Last, prove $EX = EY$:
$$\begin{aligned} EX &= \sum_{i=1}^{n} a_i P(A_i) = \sum_{i=1}^{n} a_i P\big(A_i \cap (\cup_{i=1}^{n}A_i)\big) = \sum_{i=1}^{n} a_i P\big(A_i \cap ( \cup_{j=1}^{m} B_j)\big) = \sum_{i=1}^{n} a_i P\big( \cup_{j=1}^{m}A_i B_j\big) = \sum_{i=1}^{n} \sum_{j=1}^{m} a_i P(A_i B_j), \\ EY &= \sum_{j=1}^{m} b_j P(B_j) = \sum_{j=1}^{m} b_j P\big(B_j \cap (\cup_{j=1}^m B_j)\big) = \sum_{j=1}^{m} b_j P\big(B_j \cap ( \cup_{i=1}^{n}A_i)\big) = \sum_{j=1}^{m} b_j P\big( \cup_{i=1}^{n}A_i B_j\big) = \sum_{i=1}^{n} \sum_{j=1}^{m} b_j P(A_i B_j), \end{aligned}$$
where the last equality in each line uses that the $A_i B_j$ are pairwise disjoint. Since $a_i = b_j$ whenever $A_i B_j \neq \emptyset$, we conclude $EX = EY$.
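The representation-independence just proved can also be checked numerically. A minimal sketch, assuming a finite sample space with equal probabilities and two illustrative representations of the same simple r.v. (none of these values come from the text):

```python
from fractions import Fraction

# Finite sample space with equal probabilities (an illustrative assumption).
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}

def E_simple(rep):
    """Expectation of a simple r.v. given as pairs (a_i, A_i), formula (2)."""
    return sum(a * sum(P[w] for w in A) for a, A in rep)

# Two different representations of the same simple r.v.:
# X = 2 on {0,1,2}, 5 on {3,4}, 0 on {5}.
rep1 = [(2, {0, 1, 2}), (5, {3, 4})]
rep2 = [(2, {0, 1}), (2, {2}), (5, {3, 4}), (0, {5})]

assert E_simple(rep1) == E_simple(rep2) == Fraction(8, 3)
```

Using `Fraction` keeps the arithmetic exact, so the two sums agree identically rather than up to rounding.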
Remark: see also "Measure and Probability", Section 2.3, Expectation and Integration, on zhihu.com.
- Let $X,Y$ be two simple r.v.s and $\beta$ a real number. We clearly have
$$X = \sum_{i=1}^n a_i I_{A_i}, \qquad Y = \sum_{j=1}^m b_j I_{B_j},$$
$$EX = \sum_{i=1}^n a_i P(A_i), \qquad EY = \sum_{j=1}^m b_j P(B_j).$$
- $E\{\beta X\} = \beta E\{X\}$:
$$E\{\beta X\} = \sum_{i=1}^n \beta a_i P(A_i) = \beta \sum_{i=1}^n a_i P(A_i) = \beta E\{X\}.$$
- $E\{X+Y\} = E\{X\} + E\{Y\}$:
$$\begin{aligned} E\{X+Y\} &= \sum_{i=1}^n \sum_{j=1}^m (a_i + b_j) P(A_i B_j) \\ &= \sum_{i=1}^n \sum_{j=1}^m a_i P(A_i B_j) + \sum_{i=1}^n \sum_{j=1}^m b_j P(A_i B_j) \\ &= \sum_{i=1}^n a_i P(A_i) + \sum_{j=1}^m b_j P(B_j) \\ &= E\{X\} + E\{Y\}. \end{aligned}$$
- If $X \le Y$, then $E\{X\} \le E\{Y\}$. Indeed, $X \le Y$ implies $a_i \le b_j$ whenever $A_i B_j \neq \emptyset$, so
$$\begin{aligned} E\{X\} = \sum_{i=1}^n a_i P(A_i) &= \sum_{i=1}^n \sum_{j=1}^m a_i P(A_i B_j) \\ &\le \sum_{i=1}^n \sum_{j=1}^m b_j P(A_i B_j) \\ &= \sum_{j=1}^m b_j P(B_j) = E\{Y\}. \end{aligned}$$
- Thus, expectation is linear on the vector space of all simple r.v.s.
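Linearity can be illustrated on a finite space by mirroring the refined-partition argument used in the proofs above. A minimal sketch; the sample space, probabilities, and the two partitions are illustrative assumptions:

```python
from fractions import Fraction

# Finite space with uniform probability (illustrative assumption).
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}

# X and Y given by partition representations (illustrative values).
X = [(Fraction(2), {0, 1, 2}), (Fraction(5), {3, 4, 5})]
Y = [(Fraction(1), {0, 3}), (Fraction(4), {1, 2, 4, 5})]

def E(rep):
    """Expectation via formula (2): sum of a_i * P(A_i)."""
    return sum(a * sum(P[w] for w in A) for a, A in rep)

# Representation of X + Y on the refined partition A_i ∩ B_j,
# exactly as in the additivity proof.
XplusY = [(a + b, A & B) for a, A in X for b, B in Y if A & B]
assert E(XplusY) == E(X) + E(Y)

# Homogeneity: E{beta X} = beta E{X}.
beta = Fraction(3)
betaX = [(beta * a, A) for a, A in X]
assert E(betaX) == beta * E(X)
```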
- Next, we define expectation for positive r.v.s. For $X$ positive:
- By this, we assume that $X$ may take all values in $[0,\infty]$, including $+\infty$;
- This innocuous extension is necessary for the coherence of some of our further results.

Let
$$E\{X\} = \sup \{E\{Y\}: Y \text{ a simple r.v. with } 0\le Y \le X\} \tag{3}$$

- This supremum always exists in $[0,\infty]$.
- Since expectation is a positive operator on the set of simple r.v.s, it is clear that the definition above for $E\{X\}$ coincides with Definition 9.1 when $X$ is itself simple, where $E\{X\} = \sum_{i=1}^{n} a_i P(A_i)$. (Think about why this consistency holds.)
Remark

- Note that $E\{X\}\ge 0$, but we can have $E\{X\}=\infty$, even when $X$ is never equal to $+\infty$.
Finally, let $X$ be an arbitrary r.v. Let $X^+ = \max(X,0)$ and $X^- = -\min(X,0)$. Then
$$X = X^+ - X^-, \qquad |X| = X^+ + X^-,$$
and $X^+, X^-$ are positive r.v.s.
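The pointwise identities for the positive and negative parts are easy to sanity-check; a minimal sketch (the sample values are arbitrary):

```python
def x_plus(x):
    """X+ = max(X, 0)."""
    return max(x, 0.0)

def x_minus(x):
    """X- = -min(X, 0)."""
    return -min(x, 0.0)

for x in [-3.5, -1.0, 0.0, 2.25, 7.0]:
    assert x_plus(x) - x_minus(x) == x          # X = X+ - X-
    assert x_plus(x) + x_minus(x) == abs(x)     # |X| = X+ + X-
    assert x_plus(x) >= 0 and x_minus(x) >= 0   # both parts are positive
```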
Definition 9.2
- A r.v. $X$ has a finite expectation (is "integrable") if both $E\{X^+\}$ and $E\{X^-\}$ are finite. In this case, its expectation is the number
$$E\{X\} = E\{X^+\} - E\{X^-\} \tag{4}$$
also written $\int X(w) \, dP(w)$ or $\int X \, dP$.
- If $X > 0$ then $X^- = 0$ and $X^+ = X$ and, since obviously $E\{0\} = 0$, this definition coincides with (3).
We write $\mathcal{L}^1$ to denote the set of all integrable r.v.s. (Sometimes we write $\mathcal{L}^1(\Omega,\mathcal{A},P)$ to remove any possible ambiguity.)
- A r.v. $X$ admits an expectation if $E\{X^+\}$ and $E\{X^-\}$ are not both equal to $+\infty$.
- Then the expectation of $X$ is still given by (4), with the conventions $+\infty + a = +\infty$ and $-\infty + a = -\infty$ when $a \in \mathbb{R}$.
- If $X \ge 0$ this definition again coincides with (3).
- Note that if $X$ admits an expectation, then $E\{X\} \in [-\infty,+\infty]$, and $X$ is integrable if and only if its expectation is finite.
Remark 9.1
When $\Omega$ is finite or countable we have thus two different definitions for the expectation of a r.v. $X$: the one above and the one given in Chapter 5. In fact these two definitions coincide: it is enough to verify this for a simple r.v. $X$, and in this case the formulas (5.1) and (9.2) are identical.
$$E\{X\} = \sum_{j \in T'} j \, P(X=j) \tag{5.1}$$
$$E\{X\} = \sum_{i=1}^n a_i P(A_i) \tag{9.2}$$
This is left as an exercise, to be discussed in the next lecture.
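Although the formal verification is an exercise, the agreement of (5.1) and (9.2) is easy to check numerically on a finite space. A minimal sketch; the sample space, probabilities, and values of $X$ are illustrative assumptions:

```python
from fractions import Fraction

# Simple r.v. on a six-point space with uniform probability (illustrative).
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}
values = {0: 2, 1: 2, 2: 3, 3: 3, 4: 3, 5: 0}  # X(w)

# (9.2): sum over the canonical representation with A_i = {X = a_i}.
rep = [(a, {w for w in omega if values[w] == a}) for a in set(values.values())]
E_92 = sum(a * sum(P[w] for w in A) for a, A in rep)

# (5.1): sum over the range T' of X of j * P(X = j).
E_51 = sum(j * sum(P[w] for w in omega if values[w] == j)
           for j in set(values.values()))

assert E_51 == E_92 == Fraction(13, 6)
```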
Theorem 9.1
- (a) $\mathcal{L}^1$ is a vector space, and expectation is a linear map on $\mathcal{L}^1$, and it is also positive (i.e. $X \ge 0 \Rightarrow E\{X\} \ge 0$). If further $0 \le X \le Y$ are two r.v.s and $Y \in \mathcal{L}^1$, then $X \in \mathcal{L}^1$ and $E\{X\} \le E\{Y\}$.
- (b) $X \in \mathcal{L}^1$ iff $|X| \in \mathcal{L}^1$ and in this case $|E\{X\}| \le E\{|X|\}$. In particular any bounded r.v. is integrable.
- (c) If $X = Y$ almost surely (a.s.), then $E\{X\} = E\{Y\}$. Recall that $X = Y$ a.s. means $P(X=Y) = P(\{w: X(w)=Y(w)\}) = 1$.
- (d) (Monotone convergence theorem) If the r.v.s $X_n$ are positive and increasing a.s. to $X$, then $\lim_{n\to \infty} E\{X_n\} = E\{X\}$ (even if $E\{X\} = \infty$).
- (e) (Fatou's lemma) If the r.v.s $X_n$ satisfy $X_n \ge Y$ a.s. ($Y \in \mathcal{L}^1$), all $n$, we have
$$E\left\{ \liminf_{n \to \infty} X_n \right\} \le \liminf_{n \to \infty} E\{X_n\}.$$
In particular, if $X_n \ge 0$ a.s., all $n$, then
$$E\left\{ \liminf_{n \to \infty} X_n \right\} \le \liminf_{n \to \infty} E\{X_n\}.$$
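Fatou's inequality can be strict. A classic finite illustration (the space and sets below are illustrative assumptions): let $X_n$ alternate between the indicators of two disjoint halves of $\Omega$; then $\liminf_n X_n = 0$ everywhere, while every $E\{X_n\} = 1/2$.

```python
from fractions import Fraction

omega = range(4)
P = {w: Fraction(1, 4) for w in omega}
A, B = {0, 1}, {2, 3}  # disjoint halves of omega

def X(n, w):
    """X_n = I_A for even n, I_B for odd n (all X_n >= 0)."""
    return 1 if (w in A if n % 2 == 0 else w in B) else 0

N = 50
# The sequence X_n(w) is periodic in n, so its liminf equals the min over one period;
# taking min over range(N) therefore computes liminf_n X_n(w) exactly.
liminf_X = {w: min(X(n, w) for n in range(N)) for w in omega}
E_liminf = sum(liminf_X[w] * P[w] for w in omega)
E_Xn = [sum(X(n, w) * P[w] for w in omega) for n in range(N)]

assert E_liminf == 0
assert min(E_Xn) == Fraction(1, 2)   # liminf of E{X_n} is 1/2
assert E_liminf < min(E_Xn)          # Fatou's inequality, strict here
```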
- (f) (Lebesgue's dominated convergence theorem) If the r.v.s $X_n$ converge a.s. to $X$ and if $|X_n| \le Y$ a.s. with $Y \in \mathcal{L}^1$, all $n$, then $X_n \in \mathcal{L}^1$, $X \in \mathcal{L}^1$, and $E\{X_n\} \to E\{X\}$.
Statement
- The a.s. equality between r.v.s is clearly an equivalence relation, and two equivalent (i.e. almost surely equal) r.v.s have the same expectation. Thus one can define a space $L^1$ by considering "$\mathcal{L}^1$ modulo this equivalence relation".
- In other words, an element of $L^1$ is an equivalence class, that is a collection of all r.v.s in $\mathcal{L}^1$ which are pairwise a.s. equal.
- In view of (c) above, one may speak of the "expectation" of the equivalence class (which is the expectation of any one element belonging to this class).
- Since further the addition of r.v.s or the product of a r.v. by a constant preserves a.s. equality, the set $L^1$ is also a vector space. Therefore, we commit the (innocuous) abuse of identifying a r.v. with its equivalence class, and commonly write $X \in L^1$ instead of $X \in \mathcal{L}^1$.
- If $1 \le p < \infty$, we define $\mathcal{L}^p$ to be the space of r.v.s such that $|X|^p \in \mathcal{L}^1$; $L^p$ is defined analogously to $L^1$. That is, $L^p$ is $\mathcal{L}^p$ modulo the equivalence relation "almost surely".
- Put more simply, two elements of $\mathcal{L}^p$ that are a.s. equal are considered to be representatives of one element of $L^p$.
Two auxiliary results.
Result 1
For every positive r.v. $X$ there exists a sequence $\{X_n\}_{n\ge 1}$ of positive simple r.v.s which increases toward $X$ as $n$ increases to infinity.
An example of such a sequence is given below. (Think about why the values are taken to be $\frac{k}{2^n}$.)
$$X_n(w) = \begin{cases} \frac{k}{2^n} & \text{if } \frac{k}{2^n} \le X(w) < \frac{k+1}{2^n} \text{ and } 0 \le k \le n2^{n}-1 \\ n & \text{if } X(w) \ge n \end{cases}$$
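The dyadic construction above is evaluated pathwise: at a point $w$ it only depends on the value $x = X(w)$. A minimal sketch (the sample value is an illustrative assumption):

```python
import math

def dyadic_approx(x, n):
    """X_n of Result 1, evaluated at a value x = X(w) >= 0:
    k/2^n on {k/2^n <= X < (k+1)/2^n} for 0 <= k <= n*2^n - 1, and n on {X >= n}."""
    if x >= n:
        return float(n)
    return math.floor(x * 2**n) / 2**n

# The approximations increase pointwise to x: each step refines the dyadic grid
# (halving the mesh) and raises the cap n, and the error is at most 2^{-n} once x < n.
x = 2.718281828
vals = [dyadic_approx(x, n) for n in range(1, 12)]
assert all(a <= b for a, b in zip(vals, vals[1:]))  # increasing in n
assert all(v <= x for v in vals)                    # X_n <= X
assert x - vals[-1] <= 2**-11                       # error bound 2^{-n}
```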
Result 2
If $X$ is a positive r.v., and if $\{X_n\}_{n\ge 1}$ is any sequence of positive simple r.v.s increasing to $X$, then $E\{X_n\}$ increases to $E\{X\}$.
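Result 2 can be illustrated in a concrete assumed case: take $\Omega = [0,1]$ with the uniform probability and $X(w) = w$, so $E\{X\} = 1/2$. The dyadic simple r.v. $X_n$ of Result 1 takes the value $k/2^n$ with probability $2^{-n}$, so $E\{X_n\}$ is computable exactly and increases to $1/2$:

```python
from fractions import Fraction

def E_Xn(n):
    """Exact E{X_n} for X(w) = w under the uniform probability on [0,1]:
    X_n = k/2^n on [k/2^n, (k+1)/2^n), each such interval having probability 2^{-n}."""
    return sum(Fraction(k, 2**n) * Fraction(1, 2**n) for k in range(2**n))

expectations = [E_Xn(n) for n in range(1, 10)]
assert all(a < b for a, b in zip(expectations, expectations[1:]))  # strictly increasing
assert all(e < Fraction(1, 2) for e in expectations)               # bounded by E{X} = 1/2
assert Fraction(1, 2) - expectations[-1] == Fraction(1, 2**10)     # gap is 2^{-(n+1)}
```

Closed form: $E\{X_n\} = (2^n-1)/2^{n+1} = \tfrac12 - 2^{-(n+1)} \uparrow \tfrac12$, matching the monotone convergence asserted by Result 2 (and Theorem 9.1(d)).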