Multinomial distribution

Multinomial

Parameters: $n>0$, number of trials (integer); $p_{1},\ldots ,p_{k}$, event probabilities ($\sum p_{i}=1$)
Support: $X_{i}\in \{0,\dots ,n\}$ with $\sum X_{i}=n$
pmf: $\dfrac {n!}{x_{1}!\cdots x_{k}!}\,p_{1}^{x_{1}}\cdots p_{k}^{x_{k}}$
Mean: $\operatorname {E} (X_{i})=np_{i}$
Variance: $\operatorname {Var} (X_{i})=np_{i}(1-p_{i})$; $\operatorname {Cov} (X_{i},X_{j})=-np_{i}p_{j}\ (i\neq j)$
MGF: $\left(\sum _{i=1}^{k}p_{i}e^{t_{i}}\right)^{n}$
CF: $\left(\sum _{j=1}^{k}p_{j}e^{it_{j}}\right)^{n}$ where $i^{2}=-1$
PGF: $\left(\sum _{i=1}^{k}p_{i}z_{i}\right)^{n}$ for $(z_{1},\ldots ,z_{k})\in \mathbb {C} ^{k}$

In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials, each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.

When n is 1 and k is 2, the multinomial distribution is the Bernoulli distribution. When k is 2 and the number of trials is more than 1, it is the binomial distribution. When n is 1, it is the categorical distribution.

The Bernoulli distribution is the probability distribution of whether a Bernoulli trial is a success. In other words, it models the number of heads from flipping a (possibly biased) coin one time. The binomial distribution generalizes this to the number of heads from doing n independent flips of the same coin. For the multinomial distribution, the analog of the Bernoulli distribution is the categorical distribution. Instead of flipping one coin, the categorical distribution models the roll of one k-sided die. So the multinomial distribution can model n independent rolls of a k-sided die.

Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p1, ..., pk, and n independent trials. Since the k outcomes are mutually exclusive and one must occur, we have pi ≥ 0 for i = 1, ..., k and $\sum _{i=1}^{k}p_{i}=1$. Then if the random variables Xi indicate the number of times outcome number i is observed over the n trials, the vector X = (X1, ..., Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk). While the trials are independent, their outcomes X are dependent because they must sum to n.

Note that, in some fields, such as natural language processing, the categorical and multinomial distributions are conflated, and it is common to speak of a "multinomial distribution" when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range $1\dots K$; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.

Specification

Probability mass function

Suppose one does an experiment of extracting n balls of k different colours from a bag, replacing the extracted ball after each draw. Balls of the same colour are equivalent. Denote the variable which is the number of extracted balls of colour i (i = 1, ..., k) as Xi, and denote as pi the probability that a given extraction will be of colour i. The probability mass function of this multinomial distribution is:

$$\begin{aligned}f(x_{1},\ldots ,x_{k};n,p_{1},\ldots ,p_{k})&=\Pr(X_{1}=x_{1}{\text{ and }}\dots {\text{ and }}X_{k}=x_{k})\\&={\begin{cases}{\dfrac {n!}{x_{1}!\cdots x_{k}!}}\,p_{1}^{x_{1}}\cdots p_{k}^{x_{k}},&{\text{when }}\sum _{i=1}^{k}x_{i}=n\\0&{\text{otherwise,}}\end{cases}}\end{aligned}$$

for non-negative integers x1, ..., xk.

The probability mass function can be expressed using the gamma function as:

$$f(x_{1},\dots ,x_{k};p_{1},\ldots ,p_{k})={\frac {\Gamma \left(\sum _{i}x_{i}+1\right)}{\prod _{i}\Gamma (x_{i}+1)}}\prod _{i=1}^{k}p_{i}^{x_{i}}.$$

This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.
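A minimal Python sketch, added here for illustration (the helper name multinomial_pmf is ours, not a standard function), that evaluates the pmf directly from the formula above:

    from math import factorial

    def multinomial_pmf(xs, ps):
        # n! / (x_1! ... x_k!) * p_1^{x_1} ... p_k^{x_k}, with n = sum of the counts.
        n = sum(xs)
        coef = factorial(n)
        for x in xs:
            coef //= factorial(x)  # exact: each partial multinomial coefficient is an integer
        prob = float(coef)
        for x, p in zip(xs, ps):
            prob *= p ** x
        return prob

    # Three rolls of a fair three-sided die, one of each face:
    print(multinomial_pmf([1, 1, 1], [1/3, 1/3, 1/3]))  # 3!/(1!1!1!) * (1/3)^3 = 6/27 ≈ 0.222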

Visualization

As slices of generalized Pascal's triangle

Just like one can interpret the binomial distribution as (normalized) 1D slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension, i.e. a simplex with a grid.

As polynomial coefficients

Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of $(px_{1}+(1-p)x_{2})^{n}$ when expanded, one can interpret the multinomial distribution as the coefficients of $(p_{1}x_{1}+p_{2}x_{2}+p_{3}x_{3}+\cdots +p_{k}x_{k})^{n}$ when expanded. (Note that, just like the binomial distribution, the coefficients must sum to 1.) This is the origin of the name "multinomial distribution".
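For instance, with $k=2$ and $n=2$ the expansion is
$$(px_{1}+(1-p)x_{2})^{2}=p^{2}x_{1}^{2}+2p(1-p)\,x_{1}x_{2}+(1-p)^{2}x_{2}^{2},$$
and the coefficients $p^{2}$, $2p(1-p)$ and $(1-p)^{2}$ are exactly the binomial probabilities of 2, 1 or 0 successes in two trials.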

Properties

The expected number of times the outcome i was observed over n trials is

$$\operatorname {E} (X_{i})=np_{i}.$$

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

$$\operatorname {var} (X_{i})=np_{i}(1-p_{i}).$$

The off-diagonal entries are the covariances:

$$\operatorname {cov} (X_{i},X_{j})=-np_{i}p_{j}$$

for i and j distinct.

All covariances are negative because for fixed n, an increase in one component of a multinomial vector requires a decrease in another component.

This is a k × k positive-semidefinite matrix of rank k − 1. In the special case where k = n and where the pi are all equal, the covariance matrix is the centering matrix.

The entries of the corresponding correlation matrix are

$$\rho (X_{i},X_{i})=1,$$
$$\rho (X_{i},X_{j})={\frac {\operatorname {cov} (X_{i},X_{j})}{\sqrt {\operatorname {var} (X_{i})\operatorname {var} (X_{j})}}}={\frac {-p_{i}p_{j}}{\sqrt {p_{i}(1-p_{i})\,p_{j}(1-p_{j})}}}=-{\sqrt {\frac {p_{i}p_{j}}{(1-p_{i})(1-p_{j})}}}.$$

Note that the sample size drops out of this expression.
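For example, when $k=2$ we have $1-p_{1}=p_{2}$ and $1-p_{2}=p_{1}$, so the formula reduces to $\rho (X_{1},X_{2})=-{\sqrt {\tfrac {p_{1}p_{2}}{p_{2}p_{1}}}}=-1$, as expected: since $X_{2}=n-X_{1}$, the two counts are perfectly negatively correlated.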

Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.

The support of the multinomial distribution is the set

$$\{(n_{1},\dots ,n_{k})\in \mathbb {N} ^{k}\mid n_{1}+\cdots +n_{k}=n\}.$$

Its number of elements is

$${n+k-1 \choose k-1}.$$
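A brute-force check in Python (an added sketch that simply enumerates all count vectors for small n and k):

    from itertools import product
    from math import comb

    n, k = 4, 3
    # All length-k vectors of counts in {0, ..., n} that sum to n.
    support = [c for c in product(range(n + 1), repeat=k) if sum(c) == n]
    print(len(support))            # 15
    print(comb(n + k - 1, k - 1))  # 15, matching the formula above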

Matrix notation

In matrix notation,

$$\operatorname {E} (\mathbf {X})=n\mathbf {p},$$

and

$$\operatorname {var} (\mathbf {X})=n\left\{\operatorname {diag} (\mathbf {p})-\mathbf {p}\mathbf {p}^{\rm {T}}\right\},$$

with $\mathbf {p}^{\rm {T}}$ the row-vector transpose of the column vector $\mathbf {p}$.
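These identities can be checked numerically; a small added sketch using NumPy's built-in multinomial sampler (the sample size of 200,000 is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 10, np.array([0.2, 0.3, 0.5])

    samples = rng.multinomial(n, p, size=200_000)  # one multinomial draw per row

    print(samples.mean(axis=0))               # ≈ n p = [2, 3, 5]
    print(np.cov(samples, rowvar=False))      # ≈ n {diag(p) - p p^T}
    print(n * (np.diag(p) - np.outer(p, p)))  # the exact covariance matrix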

Example

In a recent three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?

Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking, this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large.

$$\Pr(A=1,B=2,C=3)={\frac {6!}{1!\,2!\,3!}}(0.2)^{1}(0.3)^{2}(0.5)^{3}=0.135.$$
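The same value can be obtained in code, for example with SciPy's multinomial distribution (an added check, assuming SciPy is available):

    from scipy.stats import multinomial

    # n = 6 sampled voters, category probabilities (A, B, C) = (0.2, 0.3, 0.5).
    print(multinomial.pmf([1, 2, 3], n=6, p=[0.2, 0.3, 0.5]))  # 0.135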

Sampling from a multinomial distribution

First, reorder the parameters $p_{1},\ldots ,p_{k}$ such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable X from a uniform (0, 1) distribution. The resulting outcome is the component

$$j=\min \left\{j'\in \{1,\dots ,k\}:\left(\sum _{i=1}^{j'}p_{i}\right)-X\geq 0\right\}.$$

{Xj = 1, Xi = 0 for i ≠ j} is one observation from the multinomial distribution with $p_{1},\ldots ,p_{k}$ and n = 1. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with n equal to the number of such repetitions.
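A direct Python rendering of this procedure (an illustrative sketch; sample_multinomial is our own name, and the parameters are left unsorted since sorting is optional):

    import random

    def sample_multinomial(n, ps):
        # One multinomial observation as the sum of n categorical (n = 1) draws.
        counts = [0] * len(ps)
        cums, total = [], 0.0
        for p in ps:                 # cumulative sums of p_1, ..., p_k
            total += p
            cums.append(total)
        for _ in range(n):
            x = random.random()      # auxiliary uniform(0, 1) variable
            # Smallest j whose cumulative probability reaches x
            # (the default guards against floating-point round-off in the last sum).
            j = next((i for i, c in enumerate(cums) if c >= x), len(ps) - 1)
            counts[j] += 1
        return counts

    print(sample_multinomial(6, [0.2, 0.3, 0.5]))  # e.g. [1, 2, 3]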

To simulate a multinomial distribution

Various methods may be used to simulate a multinomial distribution. A very simple one is to use a random number generator to generate numbers between 0 and 1. First, we divide the interval from 0 to 1 into k subintervals whose lengths equal the probabilities of the k categories. Then, we generate a random number for each of the n trials and use a logical test to classify the virtual measure or observation into one of the categories.

Example

If we have:

Categories                       1      2      3      4      5      6
Probabilities                    0.15   0.20   0.30   0.16   0.12   0.07
Upper limits of subintervals     0.15   0.35   0.65   0.81   0.93   1.00

Then, with software such as Excel, we may use the following recipe:

Cells:      Ai       Bi                   Ci                                 ...   Gi
Formulae:   Rand()   =If($Ai<0.15;1;0)    =If(And($Ai>=0.15;$Ai<0.35);1;0)   ...   =If($Ai>=0.93;1;0)

After that, we will use functions such as SumIf to accumulate the observed results by category and to calculate the estimated covariance matrix for each simulated sample.
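The same recipe can be written in Python instead of Excel; an added sketch using the upper limits from the table above (NumPy's searchsorted plays the role of the logical tests):

    import numpy as np

    rng = np.random.default_rng(1)
    limits = np.array([0.15, 0.35, 0.65, 0.81, 0.93, 1.00])  # upper limits of the subintervals

    u = rng.random(1000)                             # one uniform(0, 1) draw per trial
    cats = np.searchsorted(limits, u, side='right')  # subinterval index = category 0..5
    counts = np.bincount(cats, minlength=6)          # counts per category: one multinomial draw
    print(counts)                                    # the six counts sum to 1000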

Another way is to use a discrete random number generator. In that case, the categories must be labeled or relabeled with numeric values.

In both cases, the result is a multinomial distribution with k categories. This is analogous, for continuous random variables, to simulating k independent standardized normal distributions, i.e. a multinormal distribution N(0, I) whose k components are identically distributed and statistically independent.

Since the counts of all categories have to sum to the number of trials, the counts of the categories are always negatively correlated.[1]

Related distributions

  • Bernoulli distribution: the special case n = 1 and k = 2.
  • Binomial distribution: the special case k = 2.
  • Categorical distribution: the special case n = 1.
  • Dirichlet distribution: the conjugate prior of the multinomial distribution.
  • Multivariate hypergeometric distribution: the corresponding distribution under sampling without replacement.
