Selected Review from Mathematical Statistics
The major part of this brief review is adapted from online material posted by the Penn State University Eberly College of Science and from notes on the textbook Mathematical Statistics and Data Analysis by John A. Rice, University of California, Berkeley.
Note:
This post is for personal review only! If you want to repost or otherwise reuse it, please ask the editor/author/organization listed above.
1. Discrete Random Variable
1. 1 Definition:
A discrete random variable is a random variable that can take on only a finite or at most a countably infinite number of values.
1. 2 Probability mass function (pmf)
For a discrete random variable, if there is a function $p$ such that $p(x_{i}) = P(X=x_{i})$ and $\sum_{i}p(x_{i})=1$, then this function is called the probability mass function, or the frequency function, of the random variable $X$.
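As a quick illustration (a minimal Python sketch; the fair six-sided die is an assumed example, not from the source), a pmf can be stored as a mapping from values to probabilities, and the condition $\sum_{i}p(x_{i})=1$ checked directly:

```python
from fractions import Fraction

# pmf of a fair six-sided die: p(x_i) = 1/6 for x_i = 1, ..., 6
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# p(x_i) = P(X = x_i) for each value in the support
assert pmf[3] == Fraction(1, 6)

# the defining condition: the probabilities sum to 1
assert sum(pmf.values()) == 1
print("pmf sums to", sum(pmf.values()))  # 1
```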
1. 3 Cumulative distribution function (cdf)
The cumulative distribution function (cdf) of a random variable is defined to be
$$F(x)=P(X \leq x), \qquad -\infty<x<\infty$$
The cumulative distribution function is non-decreasing and satisfies
$$\lim_{x\to-\infty}F(x)=0 \quad \text{and} \quad \lim_{x\to\infty}F(x)=1$$
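Continuing the die example (again an assumed illustration), the cdf of a discrete random variable can be computed by accumulating the pmf, and the two limiting values checked numerically:

```python
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def cdf(x):
    # F(x) = P(X <= x): accumulate the pmf over all support points <= x
    return sum(p for xi, p in pmf.items() if xi <= x)

# F is non-decreasing and runs from 0 to 1
print(cdf(0))  # 0    (behaviour as x -> -infinity)
print(cdf(3))  # 1/2
print(cdf(6))  # 1    (behaviour as x -> +infinity)
```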
1. 4 Independent Random Variables
In the case of two discrete random variables $X$ and $Y$, taking on possible values $x_{1}, x_{2}, \ldots$ and $y_{1}, y_{2}, \ldots$, $X$ and $Y$ are said to be independent if, for all $i$ and $j$,
$$P(X=x_{i}\ \text{and}\ Y=y_{j})=P(X=x_{i})P(Y=y_{j})$$
The definition is extended to collections of more than two discrete random variables in the obvious way; for example, $X$, $Y$, and $Z$ are said to be mutually independent if, for all $i$, $j$, and $k$,
$$P(X=x_{i},\ Y=y_{j},\ Z=z_{k})=P(X=x_{i})P(Y=y_{j})P(Z=z_{k})$$
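A small sketch of this factorization, using two fair dice as an assumed example: the joint pmf is uniform over the 36 outcomes, and it factors into the product of the marginals:

```python
from fractions import Fraction
from itertools import product

# Joint pmf of two fair dice rolled together: all 36 outcomes equally likely
joint = {(x, y): Fraction(1, 36) for x, y in product(range(1, 7), repeat=2)}

# Marginal pmfs recovered by summing the joint over the other variable
px = {x: sum(joint[(x, y)] for y in range(1, 7)) for x in range(1, 7)}
py = {y: sum(joint[(x, y)] for x in range(1, 7)) for y in range(1, 7)}

# Independence: P(X = x_i and Y = y_j) = P(X = x_i) P(Y = y_j) for all i, j
assert all(joint[(x, y)] == px[x] * py[y] for (x, y) in joint)
print("X and Y are independent")
```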
1. 5 Discrete Random Variable Distributions
1. 5. 1 Bernoulli Distribution
A Bernoulli random variable takes on only two values, 0 and 1, with probabilities $1-p$ and $p$, respectively. Its frequency function is thus
$$\begin{aligned} p(1) &= p \\ p(0) &= 1-p \\ p(x) &= 0,\ \text{if}\ x \neq 0\ \text{and}\ x \neq 1 \end{aligned}$$
An alternative and sometimes useful representation of this function is
$$p(x)= \begin{cases} p^{x}(1-p)^{1-x} & \text{if}\ x = 0\ \text{or}\ x = 1\\ 0 & \text{otherwise} \end{cases}$$
If $A$ is an event, then the indicator random variable, $I_{A}$, takes on the value 1 if $A$ occurs and the value 0 if $A$ does not occur:
$$I_{A}(\omega) = \begin{cases} 1, & \text{if}\ \omega \in A\\ 0, & \text{otherwise} \end{cases}$$
$I_{A}$ is a Bernoulli random variable. In applications, Bernoulli random variables often occur as indicators. A Bernoulli random variable might take on the value 1 or 0 according to whether a guess was a success or a failure.
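A minimal sketch of the Bernoulli frequency function $p(x)=p^{x}(1-p)^{1-x}$ and of an indicator variable, with $p = 0.3$ and the event $A$ chosen purely for illustration:

```python
import random

def bernoulli_pmf(x, p):
    # p(x) = p^x (1-p)^(1-x) for x in {0, 1}, and 0 otherwise
    return p**x * (1 - p)**(1 - x) if x in (0, 1) else 0.0

print(bernoulli_pmf(1, 0.3))  # 0.3
print(bernoulli_pmf(0, 0.3))  # 0.7

# An indicator I_A as a Bernoulli variable: here A is the (assumed)
# event "a uniform draw on [0, 1) falls below 0.3", so P(A) = 0.3
draws = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]
print(sum(draws) / len(draws))  # close to 0.3
```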
1. 5. 2 Binomial Distribution
Suppose that $n$ independent experiments, or trials, are performed, where $n$ is a fixed number, and that each experiment results in a “success” with probability $p$ and a “failure” with probability $1-p$. The total number of successes, $X$, is a binomial random variable with parameters $n$ and $p$.
The probability that $X = k$, or $p(k)$, can be found in the following way: any particular sequence of $k$ successes occurs with probability $p^{k}(1-p)^{n-k}$, from the multiplication principle. The total number of such sequences is ${n\choose k}$, since there are ${n\choose k}$ ways to assign $k$ successes to $n$ trials. $P(X = k)$ is thus the probability of any particular sequence times the number of such sequences:
$$P(X = k)={n\choose k}p^{k}(1-p)^{n-k}$$
A random variable with a binomial distribution can be expressed in terms of independent Bernoulli random variables.
Specifically, let $X_{1}, X_{2}, \ldots , X_{n}$ be independent Bernoulli random variables with $P(X_{i} = 1) = p$. Then $Y = X_{1} + X_{2} +\cdots+ X_{n}$ is a binomial random variable. Conversely, a Bernoulli random variable can be regarded as a special case of a binomial random variable with $n=1$.
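The identity $Y = X_{1} + \cdots + X_{n}$ can be checked by simulation (a sketch with assumed parameters $n = 10$, $p = 0.4$): summing independent Bernoulli draws reproduces the binomial pmf:

```python
import random
from math import comb

n, p = 10, 0.4

def binom_pmf(k):
    # P(X = k) = C(n, k) p^k (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Simulate Y = X_1 + ... + X_n with independent Bernoulli(p) summands
trials = 200_000
counts = [0] * (n + 1)
for _ in range(trials):
    y = sum(1 for _ in range(n) if random.random() < p)
    counts[y] += 1

for k in (2, 4, 6):
    print(k, counts[k] / trials, binom_pmf(k))  # empirical vs. exact
```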
1. 5. 3 Geometric Distribution
The geometric distribution is also constructed from independent Bernoulli trials, but from an infinite sequence. On each trial, a success occurs with probability $p$, and $X$ is the total number of trials up to and including the first success. In order that $X = k$, there must be $k-1$ failures followed by a success. From the independence of the trials, this occurs with probability
$$p(k)=P(X=k)=(1-p)^{k-1}p,\qquad k=1,2,3,\ldots$$
Note that these probabilities sum to 1:
$$\sum_{k=1}^{\infty}(1-p)^{k-1}p=p\sum_{j=0}^{\infty}(1-p)^{j}=1$$
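A short numeric check of the geometric pmf (with $p = 0.25$ as an assumed example); the sum truncated at a large $k$ should be essentially 1:

```python
from math import isclose

p = 0.25

def geom_pmf(k):
    # P(X = k) = (1-p)^(k-1) p : k-1 failures, then the first success
    return (1 - p) ** (k - 1) * p

# The probabilities sum to 1 (truncated here, so only approximately)
total = sum(geom_pmf(k) for k in range(1, 200))
print(total)  # very close to 1
assert isclose(total, 1.0, rel_tol=1e-9)
```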
1. 5. 4 Negative Binomial Distribution
The negative binomial distribution arises as a generalization of the geometric distribution. Suppose that a sequence of independent trials, each with probability of success $p$, is performed until there are $r$ successes in all; let $X$ denote the total number of trials. To find $P(X = k)$, we can argue in the following way: any particular such sequence has probability $p^{r}(1-p)^{k-r}$, from the independence assumption. The last trial is a success, and the remaining $r-1$ successes can be assigned to the remaining $k-1$ trials in ${k-1\choose r-1}$ ways. Thus,
$$P(X=k)={k-1\choose r-1}p^{r}(1-p)^{k-r}$$
A negative binomial random variable can be expressed as the sum of $r$ independent geometric random variables: the number of trials up to and including the first success, plus the number of trials after the first success up to and including the second success, $\ldots$, plus the number of trials from the $(r-1)$st success up to and including the $r$th success.
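This sum-of-geometrics representation can be verified by simulation (a sketch with assumed parameters $r = 3$, $p = 0.3$): the empirical frequency of $X = 10$ should match the formula above:

```python
import random
from math import comb

r, p = 3, 0.3

def nbinom_pmf(k):
    # P(X = k) = C(k-1, r-1) p^r (1-p)^(k-r), k = r, r+1, ...
    return comb(k - 1, r - 1) * p**r * (1 - p) ** (k - r)

def geometric():
    # number of Bernoulli(p) trials up to and including the first success
    trials = 1
    while random.random() >= p:
        trials += 1
    return trials

# X as the sum of r independent geometric waiting times
n_sims = 200_000
hits = sum(1 for _ in range(n_sims) if sum(geometric() for _ in range(r)) == 10)
print(hits / n_sims, nbinom_pmf(10))  # empirical vs. exact P(X = 10)
```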
1. 5. 5 Hypergeometric Distribution
Suppose that an urn contains $n$ balls, of which $r$ are black and $n-r$ are white. Let $X$ denote the number of black balls drawn when taking $m$ balls without replacement. Then
$$P(X=k)=\frac{{r\choose k}{n-r\choose m-k}}{{n\choose m}}$$
$X$ is a hypergeometric random variable with parameters $r$, $n$, and $m$.
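A direct implementation of this formula with Python's math.comb (the urn sizes are assumed for illustration); the pmf over $k = 0, \ldots, m$ should sum to 1:

```python
from math import comb

def hypergeom_pmf(k, n, r, m):
    # P(X = k) = C(r, k) C(n-r, m-k) / C(n, m)
    return comb(r, k) * comb(n - r, m - k) / comb(n, m)

# Urn with n = 20 balls, r = 7 black, draw m = 5 without replacement
n, r, m = 20, 7, 5
probs = [hypergeom_pmf(k, n, r, m) for k in range(0, m + 1)]
print(probs)
print(sum(probs))  # 1.0 up to floating-point error
```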
1. 5. 6 Poisson Distribution
The Poisson frequency function with parameter $\lambda$ $(\lambda >0)$ is
$$P(X=k)=\frac{\lambda^{k}}{k!}e^{-\lambda}, \qquad k=0,1,2,\ldots$$
Since $e^{\lambda} = \sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}$, it follows that the frequency function sums to 1. The shape of the distribution varies as a function of $\lambda$.
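A quick numeric confirmation that the Poisson pmf sums to 1 (with $\lambda = 3.5$ as an assumed example):

```python
from math import exp, factorial

lam = 3.5

def poisson_pmf(k):
    # P(X = k) = lambda^k / k! * e^(-lambda)
    return lam**k / factorial(k) * exp(-lam)

# Since e^lambda = sum_k lambda^k / k!, the pmf sums to 1
print(sum(poisson_pmf(k) for k in range(100)))  # ~1.0
```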
The Poisson distribution can be derived as the limit of a binomial distribution as the number of trials, $n$, approaches infinity and the probability of success on each trial, $p$, approaches zero in such a way that $np = \lambda$. The binomial frequency function is
$$p(k)=\frac{n!}{k!(n-k)!}p^{k}(1-p)^{n-k}$$
Setting $np = \lambda$, this expression becomes
$$\begin{aligned} p(k)&=\frac{n!}{k!(n-k)!}\left(\frac{\lambda}{n}\right)^{k}\left(1-\frac{\lambda}{n}\right)^{n-k} \\ &=\frac{\lambda^{k}}{k!}\,\frac{n!}{(n-k)!}\,\frac{1}{n^{k}}\left(1-\frac{\lambda}{n}\right)^{n}\left(1-\frac{\lambda}{n}\right)^{-k} \end{aligned}$$
As $n\to\infty$ with $\frac{\lambda}{n}\to 0$,
$$\frac{n!}{(n-k)!}\,\frac{1}{n^{k}}\to 1 \quad \text{and} \quad \left(1-\frac{\lambda}{n}\right)^{n}\to e^{-\lambda},$$
and
$$\left(1-\frac{\lambda}{n}\right)^{-k}\to 1$$
We thus have
$$P(X=k)\to\frac{\lambda^{k}e^{-\lambda}}{k!}$$
which is the Poisson frequency function.
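The limit can be observed numerically (a sketch with assumed values $\lambda = 2$, $k = 3$): holding $np = \lambda$ fixed, the binomial pmf at $k$ approaches the Poisson pmf at $k$ as $n$ grows:

```python
from math import comb, exp, factorial

lam, k = 2.0, 3
poisson = lam**k / factorial(k) * exp(-lam)

# Hold np = lambda fixed and let n grow: the binomial pmf at k
# approaches the Poisson pmf at k
for n in (10, 100, 1000, 10000):
    p = lam / n
    binom = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(n, binom)

print("Poisson limit:", poisson)  # ~0.1804
```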
The Poisson distribution often arises from a model called a Poisson process for the distribution of random events in a set $S$, which is typically one-, two-, or three-dimensional, corresponding to time, a plane, or a volume of space. Basically, this model states that if $S_{1}, S_{2}, \ldots , S_{n}$ are disjoint subsets of $S$, then the numbers of events in these subsets, $N_{1}, N_{2}, \ldots , N_{n}$, are independent random variables that follow Poisson distributions with parameters $\lambda|S_{1}|, \lambda|S_{2}|, \ldots , \lambda|S_{n}|$, where $|S_{i}|$ denotes the measure of $S_{i}$ (length, area, or volume, for example). The crucial assumptions here are that events in disjoint subsets are independent of each other and that the Poisson parameter for a subset is proportional to the subset’s size.
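A simulation sketch of this model on an assumed one-dimensional set $S = [0, T]$: a Poisson total count is drawn (via Knuth's product-of-uniforms method) and points are scattered uniformly; the mean counts in two disjoint subintervals should then be close to $\lambda|S_{1}|$ and $\lambda|S_{2}|$:

```python
import random
from math import exp

def poisson_sample(lam):
    # Knuth's method: count uniform factors until the running product
    # drops below e^(-lambda)
    threshold, k, prod = exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < threshold:
            return k
        k += 1

lam, T = 2.0, 5.0  # rate and (assumed) observation window S = [0, T]
reps = 50_000
counts_a, counts_b = [], []
for _ in range(reps):
    n = poisson_sample(lam * T)                  # total events in S
    points = [random.uniform(0, T) for _ in range(n)]
    counts_a.append(sum(1 for t in points if t < 1.0))         # S1 = [0, 1)
    counts_b.append(sum(1 for t in points if 1.0 <= t < 3.0))  # S2 = [1, 3)

# Mean counts should be close to lambda*|S1| = 2 and lambda*|S2| = 4
print(sum(counts_a) / reps, sum(counts_b) / reps)
```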
Keep Updating~
For personal review usage only! If you want to repost or otherwise reuse it, please ask the editor/author/organization listed at the beginning.