----Regression Diagnostics----
# Does the sample satisfy the normality assumption?
# Are there outliers causing large model errors?
# Is a linear model reasonable at all?
# Do the errors satisfy independence, homoscedasticity, and normality?
# Is there multicollinearity?
----Normality Test----
# Normality test: the shapiro.test() function
# If P > 0.05, normality cannot be rejected
# Data source (simulated scores)
>num=seq(10378001,10378100);num
>x1=round(runif(100,min=80,max=100));x1 # generated from a uniform distribution
>x2=round(rnorm(100,mean=80,sd=7));x2 # generated from a normal distribution
>x3=round(rnorm(100,mean=83,sd=18));x3
>x3[which(x3>100)]=100;x3 # cap scores at 100
>x=data.frame(num,x1,x2,x3);x
>shapiro.test(x$x1)
Shapiro-Wilk normality test
data: x$x1
W = 0.94894, p-value = 0.0007066 # the P-value is very small (statistically significant), so reject the null hypothesis: x1 was not generated from a normal distribution
>shapiro.test(x$x2)
Shapiro-Wilk normality test
data: x$x2
W = 0.97989, p-value = 0.1302 # the P-value is not significant, so we cannot reject the null hypothesis: there is no evidence that x2 is non-normal
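Besides the formal test, a normal Q-Q plot gives a quick visual check. A minimal sketch, regenerating a score column the same way as x2 above (the seed is an assumption, added only for reproducibility):

```r
set.seed(1)                          # assumed seed, for reproducibility
x2 <- round(rnorm(100, mean = 80, sd = 7))

# Visual check: points should lie close to the reference line if x2 is normal
qqnorm(x2)
qqline(x2)

# Formal check: a large p-value means normality is not rejected
shapiro.test(x2)
```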
----Visual Check with a Scatter Plot----
# Xue Yi, Example 6.11
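The textbook example is not reproduced here; as a minimal sketch with made-up data, plot y against x and overlay the straight-line fit. Systematic curvature of the points around the line suggests a linear model is unreasonable:

```r
# Hypothetical data: y is actually quadratic in x
set.seed(1)                          # assumed seed, for reproducibility
x <- 1:20
y <- 0.5 * x^2 + rnorm(20, sd = 2)

plot(x, y)                           # the points curve upward
abline(lm(y ~ x))                    # the straight-line fit misses the curvature
```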
----Residuals----
# residuals() computes the residuals of a fitted model
# Run a normality test on the residuals
# Residual plots
# Xue Yi, Example 6.14: standardized residual plot
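A minimal sketch of the residual workflow on made-up data (the data and seed are assumptions): extract the residuals, standardize them, draw the standardized residual plot, and test the residuals for normality:

```r
# Hypothetical linear data with normal errors
set.seed(1)                          # assumed seed, for reproducibility
x <- 1:20
y <- 2 + 3 * x + rnorm(20)
fit <- lm(y ~ x)

res  <- residuals(fit)               # raw residuals
sres <- rstandard(fit)               # standardized residuals

# Standardized residual plot: watch for points outside +/- 2
plot(fitted(fit), sres, ylab = "standardized residuals")
abline(h = c(-2, 2), lty = 2)

# Normality check on the residuals
shapiro.test(res)
```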
----Multicollinearity----
# What is multicollinearity?
# How does it affect the regression model? -- inverting X'X becomes numerically unstable
# Detect multicollinearity via the eigenvalues of the correlation matrix
# The kappa() function (condition number)
# Xue Yi, Example 6.18
# Read in the data
>collinear<-data.frame(
Y=c(10.006, 9.737, 15.087, 8.422, 8.625, 16.289,
5.958, 9.313, 12.960, 5.541, 8.756, 10.937),
X1=rep(c(8, 0, 2, 0), c(3, 3, 3, 3)),
X2=rep(c(1, 0, 7, 0), c(3, 3, 3, 3)),
X3=rep(c(1, 9, 0), c(3, 3, 6)),
X4=rep(c(1, 0, 1, 10), c(1, 2, 6, 3)),
X5=c(0.541, 0.130, 2.116, -2.397, -0.046, 0.365,
1.996, 0.228, 1.38, -0.798, 0.257, 0.440),
X6=c(-0.099, 0.070, 0.115, 0.252, 0.017, 1.504,
-0.865, -0.055, 0.502, -0.399, 0.101, 0.432))
# Collinearity analysis of the predictors
>XX<-cor(collinear[2:7])
# Compute the condition number k
>kappa(XX,exact=TRUE)
[1] 2195.908 # k = 2195.908 > 1000, indicating serious multicollinearity
# To find which variables are collinear, compute the eigenvalues and eigenvectors of the matrix
>eigen(XX)
$values
[1] 2.428787365 1.546152096 0.922077664 0.793984690 0.307892134 0.001106051
$vectors
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] -0.3907189 0.33968212 0.67980398 -0.07990398 0.2510370 -0.447679719
[2,] -0.4556030 0.05392140 -0.70012501 -0.05768633 0.3444655 -0.421140280
[3,] 0.4826405 0.45332584 -0.16077736 -0.19102517 -0.4536372 -0.541689124
[4,] 0.1876590 -0.73546592 0.13587323 0.27645223 -0.0152087 -0.573371872
[5,] -0.4977330 0.09713874 -0.03185053 0.56356440 -0.6512834 -0.006052127
[6,] 0.3519499 0.35476494 -0.04864335 0.74817535 0.4337463 -0.002166594
Result:
λmin = 0.001106, with eigenvector φ = (0.4477, 0.4211, 0.5417, 0.5734, 0.006052, 0.002167)^T (signs flipped for readability)
That is,
0.4477x1 + 0.4211x2 + 0.5417x3 + 0.5734x4 + 0.006052x5 + 0.002167x6 ≈ 0
Since the coefficients of x5 and x6 are nearly 0, drop those terms:
0.4477x1 + 0.4211x2 + 0.5417x3 + 0.5734x4 ≈ 0
So there exist constants c0, c1, c2, c3, c4 such that
c1X1 + c2X2 + c3X3 + c4X4 ≈ c0
i.e., X1, X2, X3, X4 are multicollinear.
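Because the correlation matrix is symmetric and positive semi-definite, the condition number returned by kappa(XX, exact = TRUE) equals the ratio of its largest to smallest eigenvalue, which ties the two computations above together. A sketch with made-up data containing a built-in linear dependency (variable names and seed are assumptions):

```r
# Hypothetical 3-variable data where x3 is almost exactly x1 + x2
set.seed(2)                              # assumed seed, for reproducibility
x1 <- rnorm(50)
x2 <- rnorm(50)
x3 <- x1 + x2 + rnorm(50, sd = 0.01)     # collinear by construction

XX <- cor(cbind(x1, x2, x3))
ev <- eigen(XX)$values

# For a correlation matrix, the exact condition number is lambda_max / lambda_min
k1 <- kappa(XX, exact = TRUE)
k2 <- max(ev) / min(ev)
all.equal(k1, k2)
```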
----Generalized Linear Models----
# Xue Yi, Example 6.19
# Goal: model the relationship between current strength and whether a cow opens its mouth
# Difficulty: "mouth open" is a 0-1 variable, not continuous, so an ordinary linear regression cannot be fit
# Reformulation: the PROBABILITY that a cow opens its mouth is continuous
>a=c(0:5)
>b=c(0,0.129,0.3,0.671,0.857,0.9) # observed proportions success/n at each current level
>plot(a,b) # the shape of the plot matches a logistic curve
# glm() fits generalized linear models:
fm <- glm(formula, family = binomial(link = logit), data = data.frame)
# Enter the data as a data frame
>norell<-data.frame(x=0:5, n=rep(70,6), success=c(0,9,21,47,60,63))
# Build the response matrix (successes, failures)
>norell$Ymat<-cbind(norell$success, norell$n-norell$success)
# Fit the model
>glm.sol<-glm(Ymat~x, family=binomial, data=norell)
>summary(glm.sol)
Call:
glm(formula = Ymat ~ x, family = binomial, data = norell)
Deviance Residuals:
1 2 3 4 5 6
-2.2507 0.3892 -0.1466 1.1080 0.3234 -1.6679
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -3.3010 0.3238 -10.20 <2e-16 ***
x 1.2459 0.1119 11.13 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 250.4866 on 5 degrees of freedom
Residual deviance: 9.3526 on 4 degrees of freedom
AIC: 34.093
Number of Fisher Scoring iterations: 4
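The fitted model can then be used for prediction. For instance, the current at which half the cows are expected to open their mouths is where the linear predictor crosses zero (a sketch that refits the model from the data given above):

```r
# Rebuild the model from the data above
norell <- data.frame(x = 0:5, n = rep(70, 6), success = c(0, 9, 21, 47, 60, 63))
norell$Ymat <- cbind(norell$success, norell$n - norell$success)
glm.sol <- glm(Ymat ~ x, family = binomial, data = norell)

# Predicted response probability at each current level
p <- predict(glm.sol, type = "response")

# Current giving a 50% response: solve intercept + slope * x = 0
b <- coef(glm.sol)
x50 <- as.numeric(-b[1] / b[2])
x50                                  # about 2.65
```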
----Nonlinear Models----
# Example: sales volume x vs. circulation expense rate y
>x=c(1.5,2.8,4.5,7.5,10.5,13.5,15.1,16.5,19.5,22.5,24.5,26.5)
>y=c(7.0,5.5,4.6,3.6,2.9,2.7,2.5,2.4,2.2,2.1,1.9,1.8)
>plot(x,y) # clearly nonlinear
----------------------------------
# Straight-line regression (R² is unsatisfactory)
----------------------------------
>lm.1=lm(y~x)
>summary(lm.1)
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-0.9179 -0.5537 -0.1628 0.3953 1.6519
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.60316 0.43474 12.889 1.49e-07 ***
x -0.17003 0.02719 -6.254 9.46e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.7701 on 10 degrees of freedom
Multiple R-squared: 0.7964, Adjusted R-squared: 0.776
F-statistic: 39.11 on 1 and 10 DF, p-value: 9.456e-05
---------------------------
# Logarithmic model: y = a + b*log(x)
---------------------------
>lm.log=lm(y~log(x))
>summary(lm.log)
Call:
lm(formula = y ~ log(x))
Residuals:
Min 1Q Median 3Q Max
-0.33291 -0.10133 -0.04693 0.16512 0.34844
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.3639 0.1688 43.64 9.60e-13 ***
log(x) -1.7568 0.0677 -25.95 1.66e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2064 on 10 degrees of freedom
Multiple R-squared: 0.9854, Adjusted R-squared: 0.9839
F-statistic: 673.5 on 1 and 10 DF, p-value: 1.66e-10
>plot(x,y)
>lines(x,fitted(lm.log))
--------------------------
# Exponential model: y = a*e^(bx), fit via log(y) = log(a) + b*x
--------------------------
>lm.exp=lm(log(y)~x)
>summary(lm.exp)
Call:
lm(formula = log(y) ~ x)
Residuals:
Min 1Q Median 3Q Max
-0.18246 -0.10664 -0.01670 0.08079 0.25946
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.759664 0.075101 23.43 4.54e-10 ***
x -0.048809 0.004697 -10.39 1.12e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.133 on 10 degrees of freedom
Multiple R-squared: 0.9153, Adjusted R-squared: 0.9068
F-statistic: 108 on 1 and 10 DF, p-value: 1.116e-06
>plot(x,y)
>lines(x,exp(fitted(lm.exp)))
---------------------------
# Power model: y = a*x^b, fit via log(y) = log(a) + b*log(x)
---------------------------
>lm.pow=lm(log(y)~log(x))
>summary(lm.pow)
Call:
lm(formula = log(y) ~ log(x))
Residuals:
Min 1Q Median 3Q Max
-0.054727 -0.020805 0.004548 0.024617 0.045896
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.19073 0.02951 74.23 4.81e-15 ***
log(x) -0.47243 0.01184 -39.90 2.34e-12 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.0361 on 10 degrees of freedom
Multiple R-squared: 0.9938, Adjusted R-squared: 0.9931
F-statistic: 1592 on 1 and 10 DF, p-value: 2.337e-12
>plot(x,y)
>lines(x,exp(fitted(lm.pow)))
** Comparing the fits above, the power model gives the best result (largest R²).
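This conclusion can be checked programmatically by collecting R² from the four fits in one place (note that the exp and power R² values are computed on the log(y) scale, exactly as in the summaries above):

```r
x <- c(1.5, 2.8, 4.5, 7.5, 10.5, 13.5, 15.1, 16.5, 19.5, 22.5, 24.5, 26.5)
y <- c(7.0, 5.5, 4.6, 3.6, 2.9, 2.7, 2.5, 2.4, 2.2, 2.1, 1.9, 1.8)

# The four candidate models from this section
models <- list(linear = lm(y ~ x),
               log    = lm(y ~ log(x)),
               exp    = lm(log(y) ~ x),       # R^2 on the log(y) scale
               power  = lm(log(y) ~ log(x)))  # R^2 on the log(y) scale

r2 <- sapply(models, function(m) summary(m)$r.squared)
r2                                   # power is the largest, about 0.9938
```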
-----------------------
# Polynomial Regression
-----------------------
# Orthogonal polynomial regression
# Drawback: easily leads to overfitting (degree too high) or underfitting (degree too low)
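A sketch of orthogonal polynomial regression with poly() on made-up quadratic data (data and seed are assumptions). Raising the degree always increases in-sample R², but only by fitting the noise, which is exactly the overfitting risk noted above:

```r
# Hypothetical data with a quadratic trend
set.seed(3)                          # assumed seed, for reproducibility
x <- seq(0, 10, length.out = 30)
y <- 1 + 2 * x - 0.3 * x^2 + rnorm(30, sd = 0.5)

# poly(x, 2) builds ORTHOGONAL polynomial terms, avoiding the collinearity
# between the raw columns x and x^2
fit2 <- lm(y ~ poly(x, 2))
summary(fit2)$r.squared              # already fits well

# A needlessly high degree chases the noise (overfitting)
fit9 <- lm(y ~ poly(x, 9))
summary(fit9)$r.squared              # higher in-sample, but generalizes worse
```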
----Nonlinear Least Squares----
# The nls() function
# Xue Yi, Example 6.23
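The textbook example is not reproduced here; as a sketch, nls() can fit the power model from the previous section directly on the original y scale, using the log-log estimates (intercept 2.19, slope -0.47 from lm.pow above) as starting values:

```r
x <- c(1.5, 2.8, 4.5, 7.5, 10.5, 13.5, 15.1, 16.5, 19.5, 22.5, 24.5, 26.5)
y <- c(7.0, 5.5, 4.6, 3.6, 2.9, 2.7, 2.5, 2.4, 2.2, 2.1, 1.9, 1.8)

# Starting values from the log-log fit: log(y) = 2.19 - 0.47 * log(x)
# => a = exp(2.19), b = -0.47
fit.nls <- nls(y ~ a * x^b, start = list(a = exp(2.19), b = -0.47))
summary(fit.nls)

# Unlike lm on log(y), this minimizes squared error on the original y scale
coef(fit.nls)
```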