1. Mean
A. Definition
Continuous Type
The mean of random variable $\bm{x}$ is
$$E(\bm{x}) = \int_{-\infty}^{\infty} x f(x)\,\text{d}x.$$
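As a sanity check of this definition, here is a short worked example (not from the original) using a uniform density $f(x) = \frac{1}{b-a}$ on $[a, b]$:

```latex
E(\bm{x}) = \int_a^b x \,\frac{1}{b-a}\,\text{d}x
          = \frac{1}{b-a} \cdot \frac{b^2 - a^2}{2}
          = \frac{a+b}{2},
```

which is the midpoint of the interval, as expected by symmetry.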
Discrete Type
Let random variable $\bm{x}$ take the values $x_i$ with probabilities $p_i$. Its density can be written as
$$f(x) = \sum_i p_i \delta (x - x_i).$$
Substituting the sifting identity
$$\int_{-\infty}^{\infty} x \delta (x - x_i)\,\text{d}x = x_i$$
into the definition above, we have
$$E(\bm{x}) = \sum_i p_i x_i, \qquad p_i = P(\bm{x} = x_i).$$
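The discrete formula can be sketched numerically. The helper name and the fair-die example below are illustrative assumptions, not from the original:

```python
from fractions import Fraction

def mean(pmf):
    """E(x) = sum_i p_i * x_i for a discrete pmf given as {x_i: p_i}."""
    return sum(p * x for x, p in pmf.items())

# A fair six-sided die: p_i = 1/6 for x_i = 1..6.
die = {x: Fraction(1, 6) for x in range(1, 7)}
print(mean(die))  # -> 7/2
```

Using `Fraction` keeps the arithmetic exact, matching the closed-form sum $\frac{1+2+\cdots+6}{6} = \frac{7}{2}$.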
Conditional Mean
Replacing the PDF $f(x)$ with the conditional probability density (CPD) $f(x \mid M)$, the conditional mean of random variable $\bm{x}$ given the condition $M$ is
$$E(\bm{x} \mid M) = \int_{-\infty}^{\infty} x f(x \mid M)\,\text{d}x.$$
For a discrete random variable, this becomes
$$E(\bm{x} \mid M) = \sum_i x_i P(\bm{x} = x_i \mid M).$$
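A minimal sketch of the discrete conditional mean, assuming the same fair-die example; the function name and the "outcome is even" event are illustrative:

```python
from fractions import Fraction

def conditional_mean(pmf, event):
    """E(x | M) = sum_i x_i * P(x = x_i | M), where
    P(x = x_i | M) = p_i / P(M) for the x_i satisfying M."""
    p_m = sum(p for x, p in pmf.items() if event(x))
    return sum(x * p / p_m for x, p in pmf.items() if event(x))

die = {x: Fraction(1, 6) for x in range(1, 7)}
print(conditional_mean(die, lambda x: x % 2 == 0))  # -> 4
```

Conditioning on an even outcome leaves {2, 4, 6}, each with probability 1/3, so the conditional mean is 4.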
B. Properties
The mean, denoted $E(\bm{x})$, expresses the magnitude of the direct (DC) component of a signal.
A Gaussian white noise signal has zero mean; it contains only alternating (AC) components.
Note that the squared mean, $[E(\bm{x})]^2$, expresses the power of the DC component.
2. Mean Square Value
The mean square value, $E(\bm{x}^2)$, expresses the mean power of a signal.
Mean power of a signal = power of the alternating (AC) component + power of the direct (DC) component, i.e. $E(x^2) = D(x) + [E(x)]^2$.
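This power decomposition, $E(x^2) = D(x) + [E(x)]^2$, can be checked numerically. The toy signal below (a DC offset plus a zero-mean alternating part) is an illustrative assumption:

```python
import statistics

# Toy signal: DC offset of 2 plus a zero-mean alternating part (-1, +1, ...).
signal = [2 + v for v in (-1, 1, -1, 1)]  # -> [1, 3, 1, 3]

mean_power = statistics.fmean(x * x for x in signal)  # E(x^2)
dc_power = statistics.fmean(signal) ** 2              # [E(x)]^2
ac_power = statistics.pvariance(signal)               # D(x), population variance

print(mean_power, dc_power + ac_power)  # -> 5.0 5.0
```

Here the DC part contributes $2^2 = 4$ and the AC part contributes variance $1$, so both sides equal $5$.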
3. Variance
A. Definition
It may be denoted $D(X)$, $\mathrm{Var}(X)$, or $\sigma^2$.
1) Continuous Type
$$D(X) = \sigma^2 = \int_{-\infty}^{\infty} (x-\mu)^2 f(x)\,\text{d}x,$$
where $\mu = E(X)$.
The above equation can be rewritten as
$$D(X) = \int_{-\infty}^{\infty} x^{2} f(x)\,\text{d}x - \mu^{2} = E(X^{2}) - [E(X)]^{2}.$$
2) Discrete Type
$$D(X) = \sum_{i=1}^N p_i (x_i-\mu)^2 = \sum_{i=1}^N p_i x_i^2 - \mu^2.$$
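Both discrete forms give the same value; a minimal sketch, with an illustrative three-point pmf that is not from the original:

```python
from fractions import Fraction

def variance(pmf):
    """D(X) = sum_i p_i * (x_i - mu)^2, with mu = E(X)."""
    mu = sum(p * x for x, p in pmf.items())
    return sum(p * (x - mu) ** 2 for x, p in pmf.items())

def variance_shortcut(pmf):
    """D(X) = sum_i p_i * x_i^2 - mu^2."""
    mu = sum(p * x for x, p in pmf.items())
    return sum(p * x * x for x, p in pmf.items()) - mu ** 2

pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
print(variance(pmf), variance_shortcut(pmf))  # -> 1/2 1/2
```

Here $\mu = 1$ and $E(X^2) = 3/2$, so both expressions evaluate to $1/2$.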
In probability theory, variance measures how far a random variable deviates from its mean.
For a signal, variance describes this spread and expresses the strength of the alternating component, i.e., the mean power of the AC part of the signal.
B. Properties
- Letting $C$ be a constant, we have $D(C) = 0$.
- Letting $X$ be a random variable, we have $D(CX) = C^2 D(X)$ and $D(X+C) = D(X)$.
- Letting $X, Y$ be random variables, we have $D(X \pm Y) = D(X) + D(Y) \pm 2\,\mathrm{Cov}(X,Y)$, where $\mathrm{Cov}(X,Y) = E\{[X-E(X)][Y-E(Y)]\}$.
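The last property can be verified on a small joint distribution; the correlated toy pmf below is an illustrative assumption, not from the original:

```python
# Joint pmf over (x, y): a deliberately correlated toy distribution.
joint = {(0, 0): 0.4, (1, 1): 0.4, (0, 1): 0.1, (1, 0): 0.1}

def e(f):
    """Expectation of f(x, y) under the joint pmf."""
    return sum(p * f(x, y) for (x, y), p in joint.items())

ex, ey = e(lambda x, y: x), e(lambda x, y: y)
dx = e(lambda x, y: (x - ex) ** 2)               # D(X)
dy = e(lambda x, y: (y - ey) ** 2)               # D(Y)
cov = e(lambda x, y: (x - ex) * (y - ey))        # Cov(X, Y)
d_sum = e(lambda x, y: (x + y - ex - ey) ** 2)   # D(X + Y)

assert abs(d_sum - (dx + dy + 2 * cov)) < 1e-12
```

With this pmf, $D(X) = D(Y) = 0.25$ and $\mathrm{Cov}(X,Y) = 0.15$, so $D(X+Y) = 0.8$ rather than $0.5$: the positive covariance inflates the variance of the sum.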
4. Standard Deviation
The standard deviation, denoted $\sigma$, is the square root of the variance (i.e., the root-mean-square deviation from the mean) and reflects how dispersed a dataset is.
Note that when computed between measured data and true values it serves as an accuracy metric, the root-mean-square error (RMSE): the smaller it is, the higher the accuracy.
5. Covariance
In statistics and probability theory, covariance measures the joint variability of two random variables.
$$\begin{aligned} \mathrm{Cov}(X,Y) &= E\left[(X - E(X))(Y - E(Y))\right]\\ &= E[XY] - 2E[X]E[Y] + E[X]E[Y]\\ &= E[XY] - E[X]E[Y]. \end{aligned}$$
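Both expressions for $\mathrm{Cov}(X,Y)$ agree numerically; the paired samples below (treated as an empirical distribution with equal weights) are an illustrative assumption:

```python
from statistics import fmean

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 1.0, 4.0, 3.0]

mx, my = fmean(xs), fmean(ys)
# Definition: E[(X - E(X))(Y - E(Y))]
cov_def = fmean((x - mx) * (y - my) for x, y in zip(xs, ys))
# Shortcut: E[XY] - E[X]E[Y]
cov_short = fmean(x * y for x, y in zip(xs, ys)) - mx * my

print(cov_def, cov_short)  # -> 0.75 0.75
```

The positive value reflects that `xs` and `ys` tend to move together, as described in Case 1 below.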
A. Meaning
Case 1:
If the two variables tend to vary in the same direction, i.e., when one is above its own expected value the other is also above its expected value, then the covariance between them is positive.
Case 2:
If the two variables tend to vary in opposite directions, i.e., when one is above its own expected value the other is below its expected value, then the covariance is negative. Covariance thus quantifies the degree to which $X$ and $Y$ are correlated.
If $X$ and $Y$ are statistically independent, their covariance is zero, because two independent random variables satisfy $E[XY] = E[X]E[Y]$. The converse, however, does not hold.
If the random variables $X$ and $Y$ are independent, then $E[(X-E(X))(Y-E(Y))] = 0$; hence, if this expectation is nonzero, $X$ and $Y$ cannot be independent, i.e., some relationship must exist between them.
B. The relationship between Variance and Covariance
$$D(X+Y) = D(X) + D(Y) + 2\,\mathrm{Cov}(X,Y)$$
$$D(X-Y) = D(X) + D(Y) - 2\,\mathrm{Cov}(X,Y)$$
Covariance and expectation are related as follows:
$$\mathrm{Cov}(X,Y) = E(XY) - E(X)E(Y)$$
C. Properties of Covariance
- $\mathrm{Cov}(X, Y) = \mathrm{Cov}(Y, X)$;
- $\mathrm{Cov}(aX, bY) = ab\,\mathrm{Cov}(X, Y)$, where $a$ and $b$ are constants;
- $\mathrm{Cov}(X_1 + X_2, Y) = \mathrm{Cov}(X_1, Y) + \mathrm{Cov}(X_2, Y)$.
D. Correlation Coefficient
The correlation coefficient quantifies the degree of linear correlation between two variables:
$$\rho_{XY} = \frac{\operatorname{Cov}(X, Y)}{\sqrt{D(X)}\,\sqrt{D(Y)}}$$
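A minimal sketch of this formula on sample data treated as an empirical distribution; the exactly linear pair below is an illustrative assumption chosen so that $\rho_{XY}$ should come out as 1:

```python
from math import sqrt
from statistics import fmean, pvariance

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # exactly y = 2x, so rho should be 1

mx, my = fmean(xs), fmean(ys)
cov = fmean((x - mx) * (y - my) for x, y in zip(xs, ys))
rho = cov / (sqrt(pvariance(xs, mx)) * sqrt(pvariance(ys, my)))

print(rho)  # approximately 1.0
```

This matches property (2) below: $|\rho_{XY}| = 1$ exactly when one variable is a nondegenerate linear function of the other.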
Properties
Let $\rho_{XY}$ be the correlation coefficient of random variables $X$ and $Y$. Then:
(1) $|\rho_{XY}| \leq 1$;
(2) $|\rho_{XY}| = 1$ if and only if $P\{Y = aX + b\} = 1$ for some constants $a$ and $b$ with $a \neq 0$;
(3) if $\rho_{XY} = 0$, then $X$ and $Y$ are said to be uncorrelated (not linearly related).