Selected Review from Mathematical Statistics

Most of this brief review is adapted from online material posted by the Penn State University Eberly College of Science and from notes on the textbook Mathematical Statistics and Data Analysis by John A. Rice, University of California, Berkeley.

Note:
This post is for personal review only! If you want to repost it for any other use, please ask the editor/author/organization listed above.

1. Discrete Random Variable

1.1 Definition

A discrete random variable is a random variable that can take on only a finite or at most a countably infinite number of values.

1.2 Probability Mass Function (pmf)

For a discrete random variable $X$, if there is a function $p$ such that $p(x_{i}) = P(X = x_{i})$ and $\sum_{i} p(x_{i}) = 1$, then $p$ is called the probability mass function, or the frequency function, of $X$.
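As a minimal sketch (in Python; the fair-die pmf below is an illustrative choice, not from the source), a pmf can be stored as a mapping from values to probabilities and checked to sum to 1:

```python
# A pmf stored as a dict mapping each value x_i to p(x_i) = P(X = x_i);
# here a fair six-sided die, so each face has probability 1/6 (hypothetical example).
pmf = {x: 1 / 6 for x in range(1, 7)}

assert abs(sum(pmf.values()) - 1.0) < 1e-12  # the p(x_i) must sum to 1
print(pmf[3])  # P(X = 3) = 1/6
```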

1.3 Cumulative Distribution Function (cdf)

The cumulative distribution function (cdf) of a random variable $X$ is defined to be
$$F(x) = P(X \leq x), \qquad -\infty < x < \infty.$$
The cumulative distribution function is non-decreasing and satisfies
$$\lim_{x\to-\infty} F(x) = 0 \quad \text{and} \quad \lim_{x\to\infty} F(x) = 1.$$
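For a discrete random variable, the cdf is just the running sum of the pmf. A quick sketch (Python; the die pmf is again an illustrative assumption):

```python
# The cdf of a discrete random variable is the running sum of its pmf:
# F(x) = sum of p(x_i) over all values x_i <= x.
def cdf(pmf, x):
    return sum(p for xi, p in pmf.items() if xi <= x)

pmf = {x: 1 / 6 for x in range(1, 7)}  # fair die (hypothetical example)
print(cdf(pmf, 0))    # 0.0 -- below the support, F tends to 0
print(cdf(pmf, 3.5))  # 0.5 -- P(X <= 3.5) = 3/6
print(cdf(pmf, 9))    # 1.0 -- above the support, F tends to 1
```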

1.4 Independent Random Variables

In the case of two discrete random variables $X$ and $Y$, taking on possible values $x_{1}, x_{2}, \ldots$ and $y_{1}, y_{2}, \ldots$, $X$ and $Y$ are said to be independent if, for all $i$ and $j$,
$$P(X = x_{i} \text{ and } Y = y_{j}) = P(X = x_{i})\,P(Y = y_{j}).$$
The definition is extended to collections of more than two discrete random variables in the obvious way; for example, $X$, $Y$, and $Z$ are said to be mutually independent if, for all $i$, $j$, and $k$,
$$P(X = x_{i},\ Y = y_{j},\ Z = z_{k}) = P(X = x_{i})\,P(Y = y_{j})\,P(Z = z_{k}).$$
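A small sketch of the factorization (Python; the two marginal pmfs are hypothetical choices):

```python
import itertools

# For independent X and Y the joint pmf factors into the product of the
# marginals: P(X = x_i and Y = y_j) = P(X = x_i) * P(Y = y_j).
px = {0: 0.3, 1: 0.7}            # hypothetical marginal pmf of X
py = {0: 0.5, 1: 0.25, 2: 0.25}  # hypothetical marginal pmf of Y

joint = {(x, y): px[x] * py[y] for x, y in itertools.product(px, py)}
assert abs(sum(joint.values()) - 1.0) < 1e-12  # a joint pmf sums to 1
print(joint[(1, 2)])  # 0.175 = 0.7 * 0.25
```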

1.5 Discrete Random Variable Distributions

1.5.1 Bernoulli Distribution

A Bernoulli random variable takes on only two values: 0 and 1, with probabilities $1-p$ and $p$, respectively. Its frequency function is thus
$$\begin{aligned} p(1) &= p \\ p(0) &= 1-p \\ p(x) &= 0, \quad \text{if } x \neq 0 \text{ and } x \neq 1. \end{aligned}$$
An alternative and sometimes useful representation of this function is
$$p(x) = \begin{cases} p^{x}(1-p)^{1-x} & \text{if } x = 0 \text{ or } x = 1 \\ 0 & \text{otherwise.} \end{cases}$$
If $A$ is an event, then the indicator random variable, $I_{A}$, takes on the value 1 if $A$ occurs and the value 0 if $A$ does not occur:
$$I_{A}(\omega) = \begin{cases} 1, & \text{if } \omega \in A \\ 0, & \text{otherwise.} \end{cases}$$
$I_{A}$ is a Bernoulli random variable. In applications, Bernoulli random variables often occur as indicators. A Bernoulli random variable might take on the value 1 or 0 according to whether a guess was a success or a failure.
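A minimal simulation sketch (Python; the choice $p = 0.3$ and the sample size are illustrative):

```python
import random

# A Bernoulli(p) draw as an indicator: 1 with probability p, else 0.
def bernoulli(p):
    return 1 if random.random() < p else 0

p = 0.3  # hypothetical success probability
draws = [bernoulli(p) for _ in range(100_000)]
print(sum(draws) / len(draws))  # sample mean, should be close to p = 0.3
```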

1.5.2 Binomial Distribution

Suppose that $n$ independent experiments, or trials, are performed, where $n$ is a fixed number, and that each experiment results in a “success” with probability $p$ and a “failure” with probability $1-p$. The total number of successes, $X$, is a binomial random variable with parameters $n$ and $p$.

The probability that $X = k$, or $p(k)$, can be found in the following way: any particular sequence of $k$ successes occurs with probability $p^{k}(1-p)^{n-k}$, from the multiplication principle. The total number of such sequences is $\binom{n}{k}$, since there are $\binom{n}{k}$ ways to assign $k$ successes to $n$ trials. $P(X = k)$ is thus the probability of any particular sequence times the number of such sequences:
$$P(X = k) = \binom{n}{k} p^{k} (1-p)^{n-k}.$$
A random variable with a binomial distribution can be expressed in terms of independent Bernoulli random variables.

Specifically, let $X_{1}, X_{2}, \ldots, X_{n}$ be independent Bernoulli random variables with $P(X_{i} = 1) = p$. Then $Y = X_{1} + X_{2} + \cdots + X_{n}$ is a binomial random variable.
Conversely, a Bernoulli random variable can be regarded as the special case of a binomial random variable with $n = 1$.
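The pmf and the sum-of-Bernoullis representation can be sketched as follows (Python; the parameters $n = 10$, $p = 0.3$, $k = 3$ are illustrative choices):

```python
import math
import random

# Binomial pmf: P(X = k) = C(n, k) p^k (1 - p)^(n - k).
def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p, k = 10, 0.3, 3  # hypothetical parameters
print(binom_pmf(k, n, p))  # exact probability of 3 successes in 10 trials

# The same probability estimated by summing n Bernoulli(p) indicators:
reps = 100_000
hits = sum(sum(random.random() < p for _ in range(n)) == k for _ in range(reps))
print(hits / reps)  # Monte Carlo estimate, close to the exact value
```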

1.5.3 Geometric Distribution

The geometric distribution is also constructed from independent Bernoulli trials, but from an infinite sequence. On each trial, a success occurs with probability $p$, and $X$ is the total number of trials up to and including the first success. For $X = k$ to occur, there must be $k-1$ failures followed by a success. From the independence of the trials, this happens with probability
$$p(k) = P(X = k) = (1-p)^{k-1} p, \qquad k = 1, 2, 3, \ldots$$
Note that these probabilities sum to 1, since the geometric series $\sum_{j=0}^{\infty}(1-p)^{j}$ equals $1/p$:
$$\sum_{k=1}^{\infty} (1-p)^{k-1} p = p \sum_{j=0}^{\infty} (1-p)^{j} = 1.$$
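A short sketch of the pmf and of drawing $X$ directly by running trials until the first success (Python; $p = 0.25$ is an illustrative choice):

```python
import random

# Geometric pmf: P(X = k) = (1 - p)^(k - 1) * p for k = 1, 2, 3, ...
def geom_pmf(k, p):
    return (1 - p)**(k - 1) * p

# Drawing X directly: count trials up to and including the first success.
def geom_draw(p):
    k = 1
    while random.random() >= p:
        k += 1
    return k

p = 0.25  # hypothetical success probability
print(geom_pmf(3, p))  # two failures then a success: 0.75^2 * 0.25
print(sum(geom_pmf(k, p) for k in range(1, 200)))  # ~1.0, as in the sum above
```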

1.5.4 Negative Binomial Distribution

The negative binomial distribution arises as a generalization of the geometric distribution. Suppose that a sequence of independent trials, each with probability of success $p$, is performed until there are $r$ successes in all; let $X$ denote the total number of trials. To find $P(X = k)$, we can argue in the following way: any particular such sequence has probability $p^{r}(1-p)^{k-r}$, from the independence assumption. The last trial is a success, and the remaining $r-1$ successes can be assigned to the remaining $k-1$ trials in $\binom{k-1}{r-1}$ ways. Thus,
$$P(X = k) = \binom{k-1}{r-1} p^{r} (1-p)^{k-r}.$$
A negative binomial random variable can be expressed as the sum of $r$ independent geometric random variables: the number of trials up to and including the first success, plus the number of trials after the first success up to and including the second success, ..., plus the number of trials from the $(r-1)$st success up to and including the $r$th success.
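A sketch of this pmf, including a check that $r = 1$ recovers the geometric case (Python; the parameters are illustrative):

```python
import math

# Negative binomial pmf: P(X = k) = C(k-1, r-1) p^r (1-p)^(k-r), where X
# is the total number of trials needed to obtain r successes.
def nbinom_pmf(k, r, p):
    return math.comb(k - 1, r - 1) * p**r * (1 - p)**(k - r)

r, p = 3, 0.4  # hypothetical: wait for the 3rd success, p = 0.4 per trial
print(nbinom_pmf(5, r, p))  # probability the 3rd success arrives on trial 5
# With r = 1 this reduces to the geometric pmf:
print(nbinom_pmf(5, 1, p), (1 - p)**4 * p)  # the two values agree
```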

1.5.5 Hypergeometric Distribution

Suppose that an urn contains $n$ balls, of which $r$ are black and $n-r$ are white. Let $X$ denote the number of black balls drawn when taking $m$ balls without replacement. Then
$$P(X = k) = \frac{\binom{r}{k}\binom{n-r}{m-k}}{\binom{n}{m}},$$
and $X$ is a hypergeometric random variable with parameters $r$, $n$, and $m$.
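A small sketch of this pmf (Python; the urn with $n = 20$, $r = 7$, $m = 5$ is a hypothetical example):

```python
import math

# Hypergeometric pmf: P(X = k) = C(r, k) C(n-r, m-k) / C(n, m), drawing
# m balls without replacement from n balls of which r are black.
def hypergeom_pmf(k, r, n, m):
    return math.comb(r, k) * math.comb(n - r, m - k) / math.comb(n, m)

n, r, m = 20, 7, 5  # hypothetical urn: 20 balls, 7 black, draw 5
print(hypergeom_pmf(2, r, n, m))  # probability of exactly 2 black balls
print(sum(hypergeom_pmf(k, r, n, m) for k in range(m + 1)))  # ~1.0
```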

1.5.6 Poisson Distribution

The Poisson frequency function with parameter $\lambda$ ($\lambda > 0$) is
$$P(X = k) = \frac{\lambda^{k}}{k!}e^{-\lambda}, \qquad k = 0, 1, 2, \ldots$$
Since $e^{\lambda} = \sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}$, it follows that the frequency function sums to 1. The shape of the distribution varies as a function of $\lambda$.

The Poisson distribution can be derived as the limit of a binomial distribution as the number of trials, $n$, approaches infinity and the probability of success on each trial, $p$, approaches zero in such a way that $np = \lambda$. The binomial frequency function is
$$p(k) = \frac{n!}{k!(n-k)!}p^{k}(1-p)^{n-k}.$$
Setting $np = \lambda$, this expression becomes
$$\begin{aligned} p(k) &= \frac{n!}{k!(n-k)!}\left(\frac{\lambda}{n}\right)^{k}\left(1-\frac{\lambda}{n}\right)^{n-k} \\ &= \frac{\lambda^{k}}{k!}\,\frac{n!}{(n-k)!}\,\frac{1}{n^{k}}\left(1-\frac{\lambda}{n}\right)^{n}\left(1-\frac{\lambda}{n}\right)^{-k}. \end{aligned}$$
As $n \to \infty$,
$$\frac{\lambda}{n} \to 0, \qquad \frac{n!}{(n-k)!\,n^{k}} \to 1, \qquad \left(1-\frac{\lambda}{n}\right)^{n} \to e^{-\lambda}, \qquad \left(1-\frac{\lambda}{n}\right)^{-k} \to 1.$$
We thus have
$$P(X = k) \to \frac{\lambda^{k}e^{-\lambda}}{k!},$$
which is the Poisson frequency function.
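The limit can be checked numerically; a sketch (Python; $\lambda = 2$ and $k = 3$ are illustrative choices):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return lam**k / math.factorial(k) * math.exp(-lam)

lam, k = 2.0, 3  # hypothetical lambda and k
for n in (10, 100, 1000, 10_000):
    print(n, binom_pmf(k, n, lam / n))  # approaches the Poisson value
print("Poisson:", poisson_pmf(k, lam))  # 2^3 e^{-2} / 3! ~ 0.1804
```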

The Poisson distribution often arises from a model called a Poisson process for the distribution of random events in a set $S$, which is typically one-, two-, or three-dimensional, corresponding to time, a plane, or a volume of space. Basically, this model states that if $S_{1}, S_{2}, \ldots, S_{n}$ are disjoint subsets of $S$, then the numbers of events in these subsets, $N_{1}, N_{2}, \ldots, N_{n}$, are independent random variables that follow Poisson distributions with parameters $\lambda|S_{1}|, \lambda|S_{2}|, \ldots, \lambda|S_{n}|$, where $|S_{i}|$ denotes the measure of $S_{i}$ (length, area, or volume, for example). The crucial assumptions here are that events in disjoint subsets are independent of each other and that the Poisson parameter for a subset is proportional to the subset’s size.
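One way to sketch this in one dimension (Python): draw a Poisson number of event times, scatter them uniformly over an interval, and count events in disjoint subintervals. The rate, interval, and split point below are illustrative, and the Poisson draw uses Knuth's multiplicative method, which is an outside technique, not from the source:

```python
import math
import random

# One draw from Poisson(lam) via Knuth's multiplicative method.
def poisson_draw(lam):
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

lam, T = 2.0, 10.0  # hypothetical rate and length of the interval [0, T)
# Scatter N ~ Poisson(lam * T) event times uniformly over [0, T).
points = [random.uniform(0, T) for _ in range(poisson_draw(lam * T))]
# Counts in the disjoint subsets [0, 3) and [3, 10) are then independent
# Poisson variables with parameters lam * 3 and lam * 7.
print(sum(t < 3 for t in points), sum(t >= 3 for t in points))
```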

Keep updating~
For personal review use only! If you want to repost this for any other use, please ask the editor/author/organization listed at the beginning.
