Anscombe’s quartet comprises four rather famous datasets. Why are they famous? You’ll find out in this exercise.
Anscombe’s quartet data
Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf

sns.set_context("talk")
anscombe = sns.load_dataset("anscombe")
print(anscombe)
Output
dataset x y
0 I 10.0 8.04
1 I 8.0 6.95
2 I 13.0 7.58
3 I 9.0 8.81
4 I 11.0 8.33
5 I 14.0 9.96
6 I 6.0 7.24
7 I 4.0 4.26
8 I 12.0 10.84
9 I 7.0 4.82
10 I 5.0 5.68
11 II 10.0 9.14
12 II 8.0 8.14
13 II 13.0 8.74
14 II 9.0 8.77
15 II 11.0 9.26
16 II 14.0 8.10
17 II 6.0 6.13
18 II 4.0 3.10
19 II 12.0 9.13
20 II 7.0 7.26
21 II 5.0 4.74
22 III 10.0 7.46
23 III 8.0 6.77
24 III 13.0 12.74
25 III 9.0 7.11
26 III 11.0 7.81
27 III 14.0 8.84
28 III 6.0 6.08
29 III 4.0 5.39
30 III 12.0 8.15
31 III 7.0 6.42
32 III 5.0 5.73
33 IV 8.0 6.58
34 IV 8.0 5.76
35 IV 8.0 7.71
36 IV 8.0 8.84
37 IV 8.0 8.47
38 IV 8.0 7.04
39 IV 8.0 5.25
40 IV 19.0 12.50
41 IV 8.0 5.56
42 IV 8.0 7.91
43 IV 8.0 6.89
Part 1
For each of the four datasets:
Compute the mean and variance of both x and y
Compute the correlation coefficient between x and y
Compute the linear regression line: y = β₀ + β₁x + ε
(hint: use statsmodels and look at the Statsmodels notebook)
Code
print("mean:")
print(anscombe.groupby("dataset").mean())
print("variance:")
print(anscombe.groupby("dataset").var())
print("correlation coefficient:")
print(anscombe.groupby("dataset").x.corr(anscombe.y))
# Linear regression line y = b0 + b1*x, fitted separately for each dataset
for i, name in enumerate(["I", "II", "III", "IV"]):
    group = anscombe[anscombe.dataset == name]
    X = sm.add_constant(group.x)        # add the intercept column
    results = sm.OLS(group.y, X).fit()
    print("Linear regression line of dataset " + str(i + 1))
    print("y = " + str(results.params.iloc[0]) + "+" + str(results.params.iloc[1]) + "x")
Output
mean:
x y
dataset
I 9.0 7.500909
II 9.0 7.500909
III 9.0 7.500000
IV 9.0 7.500909
variance:
x y
dataset
I 11.0 4.127269
II 11.0 4.127629
III 11.0 4.122620
IV 11.0 4.123249
correlation coefficient:
dataset
I 0.816421
II 0.816237
III 0.816287
IV 0.816521
Linear regression line of dataset 1
y = 3.0000909090909085+0.5000909090909091x
Linear regression line of dataset 2
y = 3.0009090909090905+0.5x
Linear regression line of dataset 3
y = 3.0024545454545453+0.49972727272727285x
Linear regression line of dataset 4
y = 3.0017272727272752+0.49990909090909075x
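Since the hint mentions statsmodels.formula.api, the same four fits can also be written with R-style formulas; this is a sketch equivalent to the sm.OLS loop above, not a different method:

```python
import seaborn as sns
import statsmodels.formula.api as smf

anscombe = sns.load_dataset("anscombe")

# Fit y ~ x separately for each dataset using the formula interface;
# "y ~ x" includes an intercept automatically.
for name, group in anscombe.groupby("dataset"):
    fit = smf.ols("y ~ x", data=group).fit()
    b0, b1 = fit.params["Intercept"], fit.params["x"]
    print(f"dataset {name}: y = {b0:.4f} + {b1:.4f}x")
```

All four datasets yield essentially the same line, y ≈ 3.00 + 0.50x, which is exactly what makes the quartet remarkable.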
Part 2
Using Seaborn, visualize all four datasets.
hint: use sns.FacetGrid combined with plt.scatter
Code
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=anscombe,
           col_wrap=2, ci=None, palette="muted", height=4,  # 'size' was renamed 'height' in newer seaborn
           scatter_kws={"s": 50, "alpha": 1})
plt.show()
Output: a 2×2 grid of scatter plots, one panel per dataset, each with its fitted regression line.
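The hint suggests combining sns.FacetGrid with plt.scatter; lmplot above is a shortcut that wraps this same machinery. A minimal sketch of the explicit FacetGrid approach:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted runs; drop this when running interactively
import matplotlib.pyplot as plt
import seaborn as sns

anscombe = sns.load_dataset("anscombe")

# One facet (subplot) per dataset, arranged in a 2x2 grid
g = sns.FacetGrid(anscombe, col="dataset", col_wrap=2, height=4)
# Map a plain matplotlib scatter onto each facet
g.map(plt.scatter, "x", "y", s=50)
plt.show()
```

This draws only the scatter points; unlike lmplot, FacetGrid does not add regression lines unless you map a fitting function onto it yourself.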