Statistical Formulas For Programmers

By Evan Miller

DRAFT: May 19, 2013

Being able to apply statistics is like having a secret superpower.

Where most people see averages, you see confidence intervals.

When someone says “7 is greater than 5,” you declare that they're really the same.

In a cacophony of noise, you hear a cry for help.

Unfortunately, not enough programmers have this superpower. That's a shame, because the application of statistics can almost always enhance the display and interpretation of data.

As my modest contribution to developer-kind, I've collected together the statistical formulas that I find to be most useful; this page presents them all in one place, a sort of statistical cheat-sheet for the practicing programmer.

Most of these formulas can be found in Wikipedia, but others are buried in journal articles or in professors' web pages. They are all classical (not Bayesian), and to motivate them I have added concise commentary. I've also added links and references, so that even if you're unfamiliar with the underlying concepts, you can go out and learn more. Wearing a red cape is optional.

Send suggestions and corrections to emmiller@gmail.com


Table of Contents

  1. Formulas For Reporting Averages
    1. Corrected Standard Deviation
    2. Standard Error of the Mean
    3. Confidence Interval Around the Mean
    4. Two-Sample T-Test
  2. Formulas For Reporting Proportions
    1. Confidence Interval of a Bernoulli Parameter
    2. Multinomial Confidence Intervals
    3. Chi-Squared Test
  3. Formulas For Reporting Count Data
    1. Standard Deviation of a Poisson Distribution
    2. Confidence Interval Around the Poisson Parameter
    3. Conditional Test of Two Poisson Parameters
  4. Formulas For Comparing Distributions
    1. Comparing an Empirical Distribution to a Known Distribution
    2. Comparing Two Empirical Distributions
    3. Comparing Three or More Empirical Distributions
  5. Formulas For Drawing a Trend Line
    1. Slope of a Best-Fit Trend Line
    2. Standard Error of the Slope
    3. Confidence Interval Around the Slope

1. Formulas For Reporting Averages

One of the first programming lessons in any language is to compute an average. But rarely does anyone stop to ask: what does the average actually tell us about the underlying data?

1.1 Corrected Standard Deviation

The standard deviation is a single number that reflects how spread out the data actually is. It should be reported alongside the average (unless the user will be confused).

$$s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}$$

Where:

  • $N$ is the number of observations
  • $x_i$ is the value of the $i$th observation
  • $\bar{x}$ is the average value of $x_i$

Reference: Standard deviation (Wikipedia)
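As a concrete illustration, here's a minimal Python sketch (the helper name and the data values are mine); the standard library's statistics.stdev applies the same $N-1$ correction:

```python
import math

def corrected_std(xs):
    """Sample standard deviation with Bessel's correction (divides by N - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

data = [4.2, 5.1, 4.8, 5.6, 4.9]   # made-up observations
print(corrected_std(data))          # equivalent to statistics.stdev(data)
```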

1.2 Standard Error of the Mean

From a statistical point of view, the "average" is really just an estimate of an underlying population mean. That estimate has uncertainty that is summarized by the standard error.

$$SE = \frac{s}{\sqrt{N}}$$

Reference: Standard error (Wikipedia)
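A short sketch along the same lines (again with invented numbers):

```python
import math
import statistics

def standard_error(xs):
    """Standard error of the mean: s / sqrt(N)."""
    return statistics.stdev(xs) / math.sqrt(len(xs))

print(standard_error([4.2, 5.1, 4.8, 5.6, 4.9]))
```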

1.3 Confidence Interval Around the Mean

A confidence interval reflects the set of statistical hypotheses that won't be rejected at a given significance level. So the confidence interval around the mean reflects all possible values of the mean that can't be rejected by the data. It is a multiple of the standard error added to and subtracted from the mean.

$$CI = \bar{x} \pm t_{\alpha/2}\,SE$$

Where:

  • $\alpha$ is the significance level, typically 5% (one minus the confidence level)
  • $t_{\alpha/2}$ is the $(1-\alpha/2)$ quantile of a t-distribution with $N-1$ degrees of freedom

Reference: Confidence interval (Wikipedia)
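A sketch using SciPy's t-distribution quantile function, scipy.stats.t.ppf (the data are invented):

```python
import math
import statistics
from scipy import stats

def mean_confidence_interval(xs, alpha=0.05):
    """(lower, upper) bounds for the mean at the 1 - alpha confidence level."""
    n = len(xs)
    mean = statistics.mean(xs)
    se = statistics.stdev(xs) / math.sqrt(n)
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)   # (1 - alpha/2) quantile, N - 1 d.o.f.
    return (mean - t * se, mean + t * se)

print(mean_confidence_interval([4.2, 5.1, 4.8, 5.6, 4.9]))
```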

1.4 Two-Sample T-Test

A two-sample t-test can tell whether two groups of observations differ in their mean.

The test statistic is given by:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}$$

Where $\bar{x}_1$ and $\bar{x}_2$ are the two sample means, $s_1$ and $s_2$ the corrected standard deviations, and $n_1$ and $n_2$ the sample sizes.

The hypothesis of equal means is rejected if $|t|$ exceeds the $(1-\alpha/2)$ quantile of a t distribution with degrees of freedom equal to:

$$df = \frac{\left(s_1^2/n_1 + s_2^2/n_2\right)^2}{\left(s_1^2/n_1\right)^2/(n_1-1) + \left(s_2^2/n_2\right)^2/(n_2-1)}$$

You can see a demonstration of these concepts in Evan's Awesome Two-Sample T-Test.

Reference: Student's t-test (Wikipedia)
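In practice the whole procedure is one call in SciPy; equal_var=False selects Welch's unequal-variance test, which uses the statistic and degrees-of-freedom formula above (the samples here are invented):

```python
from scipy import stats

sample1 = [12.1, 14.3, 13.8, 12.9, 15.0]   # made-up observations
sample2 = [11.2, 12.8, 12.0, 11.7, 12.5]

# equal_var=False gives Welch's t-test (unequal variances).
t_stat, p_value = stats.ttest_ind(sample1, sample2, equal_var=False)
print(t_stat, p_value)   # reject equal means at level alpha if p_value < alpha
```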


2. Formulas For Reporting Proportions

It's common to report the relative proportions of binary outcomes or categorical data, but in general these are meaningless without confidence intervals and tests of independence.

2.1 Confidence Interval of a Bernoulli Parameter

A Bernoulli parameter is the proportion underlying a binary-outcome event (for example, the percent of the time a coin comes up heads). The confidence interval is given by:

$$CI = \frac{p + \frac{z_{\alpha/2}^2}{2N} \pm z_{\alpha/2}\sqrt{\left[p(1-p) + z_{\alpha/2}^2/(4N)\right]\big/N}}{1 + z_{\alpha/2}^2/N}$$

Where:

  • $p$ is the observed proportion of interest
  • $z_{\alpha/2}$ is the $(1-\alpha/2)$ quantile of a normal distribution
  • $N$ is the number of observations

This formula can also be used as a sorting criterion.

Reference: Binomial proportion confidence interval (Wikipedia)
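A sketch of this interval (the Wilson score interval) in Python; SciPy supplies the normal quantile, and the example counts are invented:

```python
import math
from scipy import stats

def wilson_interval(p, n, alpha=0.05):
    """Wilson score interval for a proportion p observed over n trials."""
    z = stats.norm.ppf(1 - alpha / 2)
    center = p + z * z / (2 * n)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    denom = 1 + z * z / n
    return ((center - spread) / denom, (center + spread) / denom)

print(wilson_interval(18 / 50, 50))   # e.g., 18 heads out of 50 flips
```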

2.2 Multinomial Confidence Intervals

If you have more than two categories, a multinomial confidence interval supplies upper and lower confidence limits on all of the category proportions at once. The formula is nearly identical to the preceding one.

$$CI_j = \frac{p_j + \frac{z_{\alpha/2}^2}{2N} \pm z_{\alpha/2}\sqrt{\left[p_j(1-p_j) + z_{\alpha/2}^2/(4N)\right]\big/N}}{1 + z_{\alpha/2}^2/N}$$

Where:

  • $p_j$ is the observed proportion of the $j$th category

Reference: Confidence Intervals for Multinomial Proportions
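Because the formula is the per-category analogue of the previous one, the wilson_interval sketch from section 2.1 can simply be applied to each category proportion (the counts below are invented):

```python
counts = [24, 11, 15]   # made-up counts for three categories
n = sum(counts)
intervals = [wilson_interval(c / n, n) for c in counts]
print(intervals)
```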

2.3 Chi-Squared Test

Pearson's chi-squared test can detect whether the distribution of row counts seems to differ across columns (or vice versa). It is useful when comparing two or more sets of category proportions.

The test statistic, called $X^2$, is computed as:

$$X^2 = \sum_{i=1}^{n}\sum_{j=1}^{m}\frac{\left(O_{i,j} - E_{i,j}\right)^2}{E_{i,j}}$$

Where:

  • $n$ is the number of rows
  • $m$ is the number of columns
  • $O_{i,j}$ is the observed count in row $i$ and column $j$
  • $E_{i,j}$ is the expected count in row $i$ and column $j$

The expected count is given by:

$$E_{i,j} = \frac{\left(\sum_{k=1}^{n} O_{k,j}\right)\left(\sum_{l=1}^{m} O_{i,l}\right)}{N}$$

Where $N$ is the total number of observations.

A statistical dependence exists if $X^2$ is greater than the $(1-\alpha)$ quantile of a $\chi^2$ distribution with $(m-1)\times(n-1)$ degrees of freedom.

You can see a 2x2 demonstration of these concepts in Evan's Awesome Chi-Squared Test.

Reference: Pearson's chi-squared test (Wikipedia)
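SciPy implements the whole procedure; correction=False disables the Yates continuity correction so the result matches the plain Pearson statistic above (the counts are invented):

```python
from scipy import stats

observed = [[25, 15, 10],    # made-up counts: rows are groups,
            [30, 20, 25]]    # columns are outcome categories

chi2, p_value, dof, expected = stats.chi2_contingency(observed, correction=False)
print(chi2, p_value)   # dependence if p_value < alpha
```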


3. Formulas For Reporting Count Data

If the incoming events are independent, their counts are well-described by a Poisson distribution. A Poisson distribution takes a parameter  λ , which is the distribution's mean — that is, the average arrival rate of events per unit time.

3.1. Standard Deviation of a Poisson Distribution

The standard deviation of Poisson data usually doesn't need to be explicitly calculated. Instead it can be inferred from the Poisson parameter:

$$\sigma = \sqrt{\lambda}$$

This fact can be used to read an unlabeled sales chart, for example.

Reference: Poisson distribution (Wikipedia)

3.2. Confidence Interval Around the Poisson Parameter

The confidence interval around the Poisson parameter represents the set of arrival rates that can't be rejected by the data. It can be inferred from a single data point of $c$ events observed over $t$ time periods with the following formula:

$$CI = \left(\frac{\gamma^{-1}(\alpha/2,\,c)}{t},\;\frac{\gamma^{-1}(1-\alpha/2,\,c+1)}{t}\right)$$

Where:

  • $\gamma^{-1}(p,\,c)$ is the inverse of the lower incomplete gamma function

Reference: Confidence Intervals for the Mean of a Poisson Distribution
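A sketch using scipy.special.gammaincinv, which inverts the regularized lower incomplete gamma function; the event counts are invented, and the lower bound is taken as 0 when no events were observed:

```python
from scipy.special import gammaincinv

def poisson_rate_interval(c, t, alpha=0.05):
    """Confidence interval for the rate, given c events over t time periods."""
    lower = gammaincinv(c, alpha / 2) / t if c > 0 else 0.0
    upper = gammaincinv(c + 1, 1 - alpha / 2) / t
    return (lower, upper)

print(poisson_rate_interval(12, 4))   # e.g., 12 signups over 4 days
```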

3.3. Conditional Test of Two Poisson Parameters

From a statistical point of view, 5 events is indistinguishable from 7 events. Before reporting in bright red text that one count is greater than another, it's best to perform a test of the two Poisson means.

The p-value is given by:

$$p = 2 \times \frac{c!}{t^c} \times \min\left(\sum_{i=0}^{c_1}\frac{t_1^i\,t_2^{c-i}}{i!\,(c-i)!},\;\sum_{i=c_1}^{c}\frac{t_1^i\,t_2^{c-i}}{i!\,(c-i)!}\right)$$

Where:

  • Observation 1 consists of $c_1$ events over $t_1$ time periods
  • Observation 2 consists of $c_2$ events over $t_2$ time periods
  • $c = c_1 + c_2$ and $t = t_1 + t_2$

You can see a demonstration of these concepts in Evan's Awesome Poisson Means Test.

Reference: A more powerful test for comparing two Poisson means (PDF)
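The two sums above are binomial tail probabilities: conditional on $c = c_1 + c_2$, the count $c_1$ follows a Binomial($c$, $t_1/t$) distribution. A sketch using SciPy's binomial distribution, capping the p-value at 1:

```python
from scipy import stats

def poisson_means_test(c1, t1, c2, t2):
    """Two-sided conditional test of two Poisson rates."""
    c, ratio = c1 + c2, t1 / (t1 + t2)
    lower_tail = stats.binom.cdf(c1, c, ratio)      # P(X <= c1)
    upper_tail = stats.binom.sf(c1 - 1, c, ratio)   # P(X >= c1)
    return min(1.0, 2 * min(lower_tail, upper_tail))

print(poisson_means_test(5, 1, 7, 1))   # 5 vs. 7 events: far from significant
```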


4. Formulas For Comparing Distributions

If you want to test whether groups of observations come from the same (unknown) distribution, or whether a single group of observations comes from a known distribution, you'll need a Kolmogorov-Smirnov test. A K-S test checks the entire distribution for equality, not just the distribution mean.

4.1. Comparing An Empirical Distribution to a Known Distribution

The simplest version is a one-sample K-S test, which compares a sample of $n$ points having an observed cumulative distribution function $F$ to a known distribution function having a c.d.f. of $G$. The test statistic is:

$$D_n = \sup_x\,\left|F(x) - G(x)\right|$$

In plain English, $D_n$ is the absolute value of the largest difference in the two c.d.f.s for any value of $x$.

The critical value of $D_n$ is given by $K_\alpha/\sqrt{n}$, where $K_\alpha$ is the value of $x$ that solves:

$$1 - \alpha = \frac{\sqrt{2\pi}}{x}\sum_{k=1}^{\infty}\exp\left(-\frac{(2k-1)^2\pi^2}{8x^2}\right)$$

The critical value must be solved for iteratively, e.g. by Newton's method. If only the p-value is needed, it can be computed directly by solving the above for $\alpha$.

Reference: Kolmogorov-Smirnov Test (Wikipedia)
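In practice SciPy evaluates this series for you. A sketch comparing a synthetic sample against a standard normal c.d.f.:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=200)   # synthetic data

d_stat, p_value = stats.kstest(sample, "norm")   # one-sample K-S vs. N(0, 1)
print(d_stat, p_value)
```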

4.2. Comparing Two Empirical Distributions

The two-sample version is similar, except the test statistic is given by:

$$D_{n_1,n_2} = \sup_x\,\left|F_1(x) - F_2(x)\right|$$

Where $F_1$ and $F_2$ are the empirical c.d.f.s of the two samples, having $n_1$ and $n_2$ observations, respectively. The critical value of the test statistic is $K_\alpha\big/\sqrt{n_1 n_2/(n_1+n_2)}$, with the same value of $K_\alpha$ as above.

Reference: Kolmogorov-Smirnov Test (Wikipedia)
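The two-sample version is likewise one call (samples synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample1 = rng.normal(0.0, 1.0, size=150)   # synthetic data
sample2 = rng.normal(0.3, 1.0, size=200)   # shifted mean

d_stat, p_value = stats.ks_2samp(sample1, sample2)
print(d_stat, p_value)
```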

4.3. Comparing Three or More Empirical Distributions

A $k$-sample extension of Kolmogorov-Smirnov was described by J. Kiefer in a 1959 paper. The test statistic is:

$$T = \sup_x \sum_{j=1}^{k} n_j\left(F_j(x) - \bar{F}(x)\right)^2$$

Where $\bar{F}$ is the c.d.f. of the combined samples. The critical value of $T$ is $a^2$, where $a$ solves:

$$1 - \alpha = \frac{4}{\Gamma(h/2)\,2^{h/2}\,a^{h}}\sum_{n=1}^{\infty}\frac{\left(\gamma_{(h-2)/2,\,n}\right)^{h-2}\exp\left(-\gamma_{(h-2)/2,\,n}^{2}\big/2a^{2}\right)}{\left[J_{h/2}\left(\gamma_{(h-2)/2,\,n}\right)\right]^{2}}$$

Where:

  • $h = k - 1$
  • $J_{h/2}$ is a Bessel function of the first kind with order $h/2$
  • $\gamma_{(h-2)/2,\,n}$ is the $n$th zero of $J_{(h-2)/2}$

To compute the critical value, this equation must also be solved iteratively. When $k = 2$, the equation reduces to the two-sample Kolmogorov-Smirnov test. The case of $k = 4$ can also be reduced to a simpler form, but for other values of $k$, the equation cannot be reduced.

Reference: K-sample analogues of the Kolmogorov-Smirnov and Cramer-v. Mises tests (JSTOR)
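As a hedged sketch of the iterative solution, here is the special case $k = 3$ (so $h = 2$), where the Bessel order $(h-2)/2$ is 0 and the equation simplifies to $1-\alpha = (2/a^2)\sum_n \exp(-\gamma_n^2/2a^2)/J_1(\gamma_n)^2$, with $\gamma_n$ the zeros of $J_0$. Truncating the series at 50 terms and bracketing the root in [0.5, 5] are my own arbitrary choices:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j1, jn_zeros

def kiefer_critical_value(alpha=0.05, terms=50):
    """Critical value of T for k = 3 samples (h = 2); returns a**2."""
    zeros = jn_zeros(0, terms)          # first zeros of J0
    weights = 1.0 / j1(zeros) ** 2

    def cdf(a):                         # truncated series for 1 - alpha
        return (2.0 / a**2) * np.sum(np.exp(-zeros**2 / (2 * a**2)) * weights)

    a = brentq(lambda a: cdf(a) - (1 - alpha), 0.5, 5.0)
    return a * a

print(kiefer_critical_value(0.05))   # reject if T exceeds this value
```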


5. Formulas For Drawing a Trend Line

Trend lines (or best-fit lines) can be used to establish a relationship between two variables and predict future values.

5.1. Slope of a Best-Fit Line

The slope of a best-fit (least squares) line is:

$$m = \frac{\sum_{i=1}^{N}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{N}(x_i-\bar{x})^2}$$

Where:

  • $\{x_1,\ldots,x_N\}$ is the independent variable with sample mean $\bar{x}$
  • $\{y_1,\ldots,y_N\}$ is the dependent variable with sample mean $\bar{y}$

5.2. Standard Error of the Slope

The standard error around the estimated slope is:

$$SE = \frac{\sqrt{\sum_{i=1}^{N}\left(y_i-\bar{y}-m(x_i-\bar{x})\right)^2\big/(N-2)}}{\sqrt{\sum_{i=1}^{N}(x_i-\bar{x})^2}}$$

5.3. Confidence Interval Around the Slope

The confidence interval is constructed as:

$$CI = m \pm t_{\alpha/2}\,SE$$

Where:

  • $\alpha$ is the significance level, typically 5% (one minus the confidence level)
  • $t_{\alpha/2}$ is the $(1-\alpha/2)$ quantile of a t-distribution with $N-2$ degrees of freedom

Reference: Simple linear regression (Wikipedia)
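A sketch computing all three quantities of this section at once; the function name and data are mine, and SciPy supplies the t quantile:

```python
import math
from scipy import stats

def trend_line(xs, ys, alpha=0.05):
    """Least-squares slope, its standard error, and a confidence interval."""
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    sxx = sum((x - x_mean) ** 2 for x in xs)
    m = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sxx
    resid = sum((y - y_mean - m * (x - x_mean)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(resid / (n - 2)) / math.sqrt(sxx)
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)   # (1 - alpha/2) quantile
    return m, se, (m - t * se, m + t * se)

xs = [1, 2, 3, 4, 5, 6]                # made-up data
ys = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1]
print(trend_line(xs, ys))
```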


If you own a Mac, check out my desktop statistics software: Wizard, a statistical analyzer.
