The index article for the 100-day machine learning challenge is linked here.
Step 1: Data Preprocessing
For this part, see the Day 1 content on data preprocessing.
import pandas as pd
df = pd.read_csv('studentscores.csv')
# print(df)
X = df.iloc[:, :1].values
# X = df.iloc[:, 0].values  # don't use this: it yields a 1D vector, but regressor.fit below requires 2D input
Y = df.iloc[:, 1].values
print(X)
# print(Y)
# from sklearn.impute import SimpleImputer  # Imputer was removed; SimpleImputer is the modern replacement
# imp = SimpleImputer(missing_values=np.nan, strategy='mean', copy=True)
# imp.fit(X)
# X[:, 1:] = imp.transform(X)
# print(X)
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20, random_state=0)
# print(X_train)
Note that even though there is only one feature here (study hours; this is univariate linear regression), X must be 2D (as shown below), otherwise regressor.fit will raise an error later.
[[ 2.5]
[ 5.1]
[ 3.2]
[ 8.5]
[ 3.5]
[ 1.5]
[ 9.2]
[ 5.5]
[ 8.3]
[ 2.7]
[ 7.7]
[ 5.9]
[ 4.5]
[ 3.3]
[ 1.1]
[ 8.9]
[ 2.5]
[ 1.9]
[ 6.1]
[ 7.4]
[ 2.7]
[ 4.8]
[ 3.8]
[ 6.9]
[ 7.8]]
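To make the 1D-vs-2D point concrete, here is a small sketch (my own illustration, not from the original post) of turning a 1D vector into the 2D column shape that scikit-learn estimators expect, using NumPy's reshape:

```python
import numpy as np

# A 1D array of study hours (first few values from the printout above)
x_1d = np.array([2.5, 5.1, 3.2])
print(x_1d.shape)  # (3,) -- 1D; passing this to regressor.fit would raise an error

# reshape(-1, 1) produces a column vector: one sample per row, one feature per column
x_2d = x_1d.reshape(-1, 1)
print(x_2d.shape)  # (3, 1) -- 2D, the shape scikit-learn accepts
```

This is also why `df.iloc[:, :1].values` (a slice, which keeps the column dimension) works in the code above while `df.iloc[:, 0].values` (a single column, which drops it) does not.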
Step 2: Fit a Simple Linear Regression Model to the Training Set
The LinearRegression documentation page is here; you can also refer to this article, which explains it in detail.
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor = regressor.fit(X_train, Y_train)
print(regressor.coef_)
print(regressor.intercept_)
Here coef_ holds the regression coefficients and intercept_ holds the intercept. Their values are:
[ 9.91065648]
2.01816004143
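As a quick sanity check (my addition, not part of the original post): for simple linear regression, a prediction is just coef_ * x + intercept_. Plugging in the fitted values printed above for a hypothetical 1.5 study hours:

```python
# Fitted parameters printed above
coef = 9.91065648
intercept = 2.01816004143

hours = 1.5  # hypothetical number of study hours
score = coef * hours + intercept
print(round(score, 8))  # 16.88414476, matching the first value of Y_pred below
```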
Step 3: Predict the Results
Y_pred = regressor.predict(X_test)
print(Y_pred)
The printed result is:
[ 16.88414476 33.73226078 75.357018 26.79480124 60.49103328]
Step 4: Plotting
import matplotlib.pyplot as plt
plt.scatter(X_train, Y_train, color='red')
plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.scatter(X_test, Y_test, color='red')
plt.plot(X_test, regressor.predict(X_test), color='blue')
plt.show()
Finally: The Complete Code
Here is the complete code for the whole process:
import pandas as pd
df = pd.read_csv('studentscores.csv')
# print(df)
X = df.iloc[:, :1].values
# X = df.iloc[:, 0].values  # don't use this: it yields a 1D vector, but regressor.fit below requires 2D input
Y = df.iloc[:, 1].values
# print(X)
# print(Y)
# from sklearn.impute import SimpleImputer  # Imputer was removed; SimpleImputer is the modern replacement
# imp = SimpleImputer(missing_values=np.nan, strategy='mean', copy=True)
# imp.fit(X)
# X[:, 1:] = imp.transform(X)
# print(X)
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20, random_state=0)
# print(X_train)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor = regressor.fit(X_train, Y_train)
# print(regressor.coef_)
# print(regressor.intercept_)
Y_pred = regressor.predict(X_test)
# print(Y_pred)
import matplotlib.pyplot as plt
plt.scatter(X_train, Y_train, color='red')
plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.scatter(X_test, Y_test, color='red')
plt.plot(X_test, regressor.predict(X_test), color='blue')
plt.show()