Before I learned about psychological statistics, I assumed it was the same thing as the statistics taught in my math department. This morning's reading taught me (yes, I am a half-day-old beginner) that although the underlying principles are the same, the emphasis is completely different (I never properly studied mathematical statistics either): psychological statistics focuses on the applied analysis of statistical data.
These notes are mainly meant as a quick reference (for readers from non-STEM backgrounds) for interpreting parameters when analyzing data later.
Reference: *Discovering Statistics*
This post will be extended, and hyperlinks added, as other notes accumulate.
1. Basic Parameter Symbols
1.1. Mathematical Operators
1.1.1. $\sum, \prod$
- $\sum$ : sum of
  - $\displaystyle\sum_{i=0}^{N}x_i = \underbrace{x_0 + x_1 + \cdots + x_N}_{N+1 \text{ terms}}$
- $\prod$ : product of
  - $\displaystyle\prod_{i=0}^{N}x_i = \underbrace{x_0 \times x_1 \times \cdots \times x_N}_{N+1 \text{ terms}}$
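As a quick numeric check, both operators map directly onto Python built-ins (a minimal sketch; the sample values are invented):

```python
import math

x = [2, 3, 5, 7]  # hypothetical data points x_0 .. x_3

# Sigma: sum of the x_i
total = sum(x)          # 2 + 3 + 5 + 7 = 17

# Pi: product of the x_i
product = math.prod(x)  # 2 * 3 * 5 * 7 = 210

print(total, product)
```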
1.1.2. $\sqrt{x}, \ln, \bar{X}$
- $\sqrt{x}$ : square root of $x$
- $\ln$ : natural logarithm, $\ln(x) = \log_e(x)$
- $\bar{X}$ : the mean of a sample
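The same three symbols in standard-library Python (a small sketch; the sample list is made up):

```python
import math
import statistics

x = 9.0
sample = [4, 8, 6, 2]           # hypothetical sample

root = math.sqrt(x)             # square root of x: sqrt(9) = 3.0
log_val = math.log(x)           # natural logarithm: ln(9) = log_e(9)
mean = statistics.mean(sample)  # X-bar, the sample mean

print(root, log_val, mean)
```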
1.2. Greek symbols
1.2.1. $\alpha, \beta, \beta_i, \epsilon$
- $\alpha$ : Alpha, the probability of making a Type I error.
  - A Type I error corresponds to convicting an innocent defendant. (Wikipedia)
- $\beta$ : Beta, the probability of making a Type II error.

| | Accept $H_0$ | Reject $H_0$ |
|---|---|---|
| $H_0$ true | Correct | Type I error |
| $H_0$ false | Type II error | Correct |
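One way to see what $\alpha$ means in practice is to simulate many experiments in which $H_0$ is actually true and count how often a two-sided z-test (reject when $|z| > 1.96$, i.e. $\alpha = 0.05$) rejects it anyway. This is only an illustrative standard-library sketch; the sample size, trial count, and seed are arbitrary choices:

```python
import math
import random

random.seed(42)

CRITICAL_Z = 1.96   # two-sided critical value for alpha = 0.05
MU, SIGMA, N = 0.0, 1.0, 30
TRIALS = 5000

false_positives = 0
for _ in range(TRIALS):
    # Draw a sample from a population where H0 (mu = 0) is TRUE.
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    x_bar = sum(sample) / N
    se = SIGMA / math.sqrt(N)      # standard error of the mean
    z = (x_bar - MU) / se
    if abs(z) > CRITICAL_Z:        # wrongly rejecting H0: a Type I error
        false_positives += 1

type_i_rate = false_positives / TRIALS
print(type_i_rate)  # should hover around 0.05
```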
- $\beta_i$ : the standardized regression coefficient
  - e.g. the population regression model: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$
- $b_i$ : the unstandardized regression coefficient
  - e.g. the corresponding sample model: $Y = b_0 + b_1 X_1 + b_2 X_2 + e$
- $\epsilon$ : Epsilon, stands for "error"; also used as an estimate of sphericity in repeated-measures ANOVA
  - $e_i$ : the error associated with the $i$-th observation
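To make the $b$'s and the residuals $e_i$ concrete, here is a minimal one-predictor least-squares sketch (the data are invented for illustration):

```python
import statistics

# Hypothetical data: Y modeled as b0 + b1 * X + e
X = [1, 2, 3, 4, 5]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]

x_bar, y_bar = statistics.mean(X), statistics.mean(Y)

# Least-squares slope b1 and intercept b0 (unstandardized coefficients)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / \
     sum((x - x_bar) ** 2 for x in X)
b0 = y_bar - b1 * x_bar

# e_i: the error (residual) associated with each observation
residuals = [y - (b0 + b1 * x) for x, y in zip(X, Y)]

print(b0, b1)
print(residuals)
```

A least-squares fit always makes the residuals sum to (numerically) zero, which is a handy sanity check.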
1.2.2. $\mu, \rho, \sigma$
- $\mu$ : Mu, the mean of a population of scores
- $\rho$ : Rho, the correlation in the population; also used for Spearman's correlation coefficient
- $\sigma$ : the standard deviation of a population of data
  - $\sigma^2$ : the variance of a population of data
  - $\sigma_{\bar{X}}$ : the standard error of the mean, same as $SE_{\bar{X}}$
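A small sketch distinguishing the population standard deviation, the population variance, and the standard error of the mean (the list below is a made-up set treated as the whole population):

```python
import math
import statistics

population = [2, 4, 4, 4, 5, 5, 7, 9]        # hypothetical population of scores

sigma = statistics.pstdev(population)        # population standard deviation
sigma_sq = statistics.pvariance(population)  # population variance, sigma^2

# Standard error of the mean for samples of size n from this population
n = 4
se_mean = sigma / math.sqrt(n)

print(sigma, sigma_sq, se_mean)
```

Note the `p`-prefixed functions: `pstdev`/`pvariance` divide by $N$ (population), while `stdev`/`variance` divide by $n-1$ (sample).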
1.2.3. $\tau, \phi, \chi$
- $\tau$ : Kendall's tau (a non-parametric correlation coefficient)
- $\phi$ : Phi, a measure of association between two categorical variables; also denotes the dispersion parameter in logistic regression
- $\chi^2$ : chi-square, a test statistic that quantifies the association between two categorical variables
  - $\chi^2_F$ : the test statistic in Friedman's ANOVA, a non-parametric test of differences between related means
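For a 2×2 contingency table, $\chi^2$ compares the observed counts to the counts expected if the two categorical variables were independent. A standard-library sketch (the counts are invented):

```python
# Observed counts for two categorical variables (rows x columns)
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under independence: row total * column total / grand total
        exp = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (obs - exp) ** 2 / exp

print(chi_sq)
```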
1.2.4. $\eta, \omega$
- $\eta^2$ : eta squared, an effect-size measure
- $\omega^2$ : omega squared, an effect-size measure
1.3. Latin symbols
1.3.1. $b, k, r, s, z$
- $b_i$ : the unstandardized regression coefficient
- $k$ : the number of levels of a variable, or the number of predictors in a regression model
- $r$ : Pearson's correlation coefficient
  - $r_s$ : Spearman's rank correlation coefficient
  - $r_b$ : the biserial correlation coefficient
  - $r_{pb}$ : the point-biserial correlation coefficient
- $s$ : the standard deviation of a sample of data
  - $s^2$ : the variance of a sample of data
- $z$ : a data point expressed in standard deviation units
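Converting a raw score to $z$ units means asking how many sample standard deviations it sits from the sample mean (the numbers below are made up):

```python
import statistics

sample = [10, 12, 14, 16, 18]        # hypothetical sample

s = statistics.stdev(sample)         # sample standard deviation s (n - 1 divisor)
s_sq = statistics.variance(sample)   # sample variance s^2
x_bar = statistics.mean(sample)

# z-score: the raw score 17 expressed in standard-deviation units
z = (17 - x_bar) / s

print(s, s_sq, z)
```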
1.3.2. Sample $N, n, n_i$
- $N$ : the total sample size
- $n, n_i$ : the size of a particular group
1.3.3. Test statistics $t, F, H, T, U, W$
- $t$ : the test statistic for a t-test
- $F$ : the F-statistic
- $H$ : the Kruskal-Wallis test statistic
- $T$ : the test statistic for Wilcoxon's matched-pairs signed-rank test
- $U$ : the test statistic for the Mann-Whitney test
- $W_s$ : the test statistic for Wilcoxon's rank-sum test
1.3.4. Capital $SE, MS, P, R, SS$
- $SE$ : the standard error of the mean, same as $\sigma_{\bar{X}}$
- $MS$ : the mean square, the average variability in the data
- $P$ : probability
  - $p$ : the probability value, p-value, or significance of a test
- $R$ : the multiple correlation coefficient
  - $R^2$ : the coefficient of determination (the proportion of variability in the data explained by the model)
- $SS$ : the sum of squares, or sum of squared errors
  - $SS_A$ : the sum of squares for variable A
  - $SS_M$ : the model sum of squares (the variability explained by the model fitted to the data)
  - $SS_R$ : the residual sum of squares (the variability that the model can't explain, i.e. the error in the model)
  - $SS_T$ : the total sum of squares (the total variability within the data)
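The sums of squares tie together: $SS_T = SS_M + SS_R$, and $R^2 = SS_M / SS_T$. A sketch using a simple one-predictor least-squares fit (the data are invented):

```python
import statistics

X = [1, 2, 3, 4, 5]
Y = [2.0, 4.1, 5.9, 8.2, 9.8]

# Fit Y = b0 + b1 * X by least squares
x_bar, y_bar = statistics.mean(X), statistics.mean(Y)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / \
     sum((x - x_bar) ** 2 for x in X)
b0 = y_bar - b1 * x_bar
predicted = [b0 + b1 * x for x in X]

ss_t = sum((y - y_bar) ** 2 for y in Y)                       # total SS
ss_m = sum((yhat - y_bar) ** 2 for yhat in predicted)         # model SS
ss_r = sum((y - yhat) ** 2 for y, yhat in zip(Y, predicted))  # residual SS

r_squared = ss_m / ss_t   # coefficient of determination R^2

print(ss_t, ss_m, ss_r, r_squared)
```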
1.3.5. $df$
- $df$ : degrees of freedom
1.4. Others

| Measure | Statistic (sample) | Parameter (population) |
|---|---|---|
| Mean | $\bar{x}$ | $\mu$ |
| Standard deviation | $s$ | $\sigma$ |
| Correlation coefficient | $r$ | $\rho$ |
| Regression coefficient | $b$ | $\beta$ |