Task description:
There are two datasets, a train set and a test set. The train set contains all of the air-quality measurements recorded at a weather station during the first 20 days of each month; the test set is sampled from the remaining data.
train.csv: hourly weather measurements (18 kinds of observations per hour) for the first 20 days of each month, over 12 months.
test.csv: from the remaining data, consecutive 10-hour windows are sampled, one per record. All measurements from the first 9 hours serve as features, and the PM2.5 value of the 10th hour is the value to predict. 240 non-overlapping test records are drawn this way; the task is to predict PM2.5 for each of them from its features.
First, take a look at the training and test data:
train.csv
test.csv
1. Reading the data
This was my first time handling this kind of task, and I spent quite a bit of effort figuring out how to read and process the CSV files.
CSV (Comma-Separated Values; sometimes called character-separated values, since the delimiter need not be a comma) files store tabular data (numbers and text) as plain text. Plain text means the file is a sequence of characters, with no data that must be interpreted as binary. A CSV file consists of any number of records separated by line breaks; each record consists of fields separated by some delimiter character or string, most commonly a comma or a tab. Usually all records share exactly the same sequence of fields.
Here pandas is used to read the files; see http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html for details.
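As a quick illustration of how pandas.read_csv parses such text, here is a tiny in-memory CSV (the column names below are made up for illustration and are not the actual schema of train.csv):

```python
import io

import pandas as pd

# A tiny CSV in memory; the columns are illustrative only.
csv_text = (
    "Date,observation,0,1\n"
    "2014/1/1,PM2.5,26,39\n"
    "2014/1/1,SO2,1.8,2.1\n"
)
df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)              # (2, 4)
print(df['observation'][0])  # PM2.5
```

The first line is taken as the header by default; each remaining line becomes one row of the DataFrame.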
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

path = "./week1/"
train = pd.read_csv(path + 'train.csv', engine='python', encoding='gbk')
test = pd.read_csv(path + 'test(1).csv', engine='python', encoding='gbk')
2. Preprocessing the data
The raw file contains 18 kinds of observations, but we only need to predict PM2.5, so we extract the rows for that observation. The raw files also contain headers and other columns, all of which must be stripped so that the training and test data contain only the features we need.
A few points to note here:
1) Since the test data gives the first 9 hours to predict the 10th hour, we can likewise treat every 9 consecutive hours in the training data as one group of features and the 10th hour as the label; each 24-hour row of raw data then yields 15 such feature+label groups.
2) Normalize the features.
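The sliding-window construction in point 1) can be sketched on a toy 24-hour series (the values here are made up):

```python
import numpy as np

# One "day" of 24 hourly readings (toy values 0..23).
series = np.arange(24)

# Every 9 consecutive hours form a feature vector; the 10th hour is the
# label. A 24-hour row therefore yields 24 - 9 = 15 (feature, label) pairs.
X = np.array([series[i:i + 9] for i in range(15)])
y = np.array([series[i + 9] for i in range(15)])
print(X.shape, y.shape)  # (15, 9) (15,)
print(X[0], y[0])        # [0 1 2 3 4 5 6 7 8] 9
```

With 240 days of training data, stacking these windows gives 240 * 15 = 3600 training samples of 9 features each.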
# Keep only the PM2.5 rows
train = train[train['observation'] == 'PM2.5']
# test.csv has no header row, so pandas used its first data row as column
# names; the column labeled 'AMB_TEMP' holds the observation names
test = test[test['AMB_TEMP'] == 'PM2.5']
train = train.drop(['Date', 'stations', 'observation'], axis=1)
test_x = test.iloc[:, 2:]
# Build one large training set from the training data:
# every 9 consecutive columns form one group of features and the next
# column is the label; each row yields 15 such feature+label groups,
# which concatenated together give a 3600 x 9 feature matrix
train_x = []
train_y = []
for i in range(15):
    x = train.iloc[:, i:i + 9]
    # if we don't reset the column names, each iteration carries different
    # column names and pd.concat would misalign the columns
    x.columns = np.array(range(9))
    y = train.iloc[:, i + 9]
    train_x.append(x)
    train_y.append(y)
# Concatenate the 15 groups
train_x = pd.concat(train_x)
train_y = pd.concat(train_y)
# Convert to float arrays (the raw columns are read as strings)
train_x = np.array(train_x, float)
train_y = np.array(train_y, float)
test_x = np.array(test_x, float)
print(train_x.shape, train_y.shape)
# Normalize the data; without normalization, convergence is very slow.
# Fit the scaler on the training data only and reuse its statistics for
# the test data, so both sets go through the same transformation.
ss = StandardScaler()
train_x = ss.fit_transform(train_x)
test_x = ss.transform(test_x)
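As a sanity check on the standardization step, the same idea can be reproduced with plain NumPy: compute the mean and standard deviation on the training data only, then apply them to both sets (random toy data below):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(5.0, 2.0, size=(100, 3))
X_test = rng.normal(5.0, 2.0, size=(10, 3))

# Fit the statistics on the training set only...
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
# ...then reuse them for the test set, mirroring StandardScaler
# fit on train followed by transform on both sets.
X_train_s = (X_train - mu) / sigma
X_test_s = (X_test - mu) / sigma

print(X_train_s.mean(axis=0))  # ~0 per column
print(X_train_s.std(axis=0))   # ~1 per column
```

After this, every training feature has (near-)zero mean and unit variance, while the test set is shifted and scaled by the same constants rather than its own statistics.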
3. Defining the linear regression model
# An evaluation function
def r2_score(y_true, y_predict):
    """Compute the R-squared between y_true and y_predict"""
    MSE = np.sum((y_true - y_predict) ** 2) / len(y_true)
    return 1 - MSE / np.var(y_true)
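A quick worked example of this metric, with values chosen by hand:

```python
import numpy as np

def r2_score(y_true, y_predict):
    """R-squared: 1 - MSE / Var(y_true), as defined above."""
    MSE = np.sum((y_true - y_predict) ** 2) / len(y_true)
    return 1 - MSE / np.var(y_true)

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
# MSE = (0.25 + 0.25 + 0 + 1) / 4 = 0.375; Var(y_true) ≈ 7.297
print(round(r2_score(y_true, y_pred), 4))  # 0.9486
```

An R-squared of 1 means perfect prediction; 0 means the model does no better than always predicting the mean of y_true.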
# Linear regression, adapted from another author's code
class LinearRegression:
    def __init__(self):
        """Initialize the Linear Regression model"""
        self.coef_ = None
        self.intercept_ = None
        self._theta = None

    def fit_normal(self, X_train, y_train):
        """Fit the model on X_train, y_train via the normal equation"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        self._theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y_train)
        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def fit_gd(self, X_train, y_train, eta=0.01, n_iters=1e4):
        """
        Fit the model on X_train, y_train via batch gradient descent.
        :param X_train: training set
        :param y_train: labels
        :param eta: learning rate
        :param n_iters: number of iterations
        :return: self
        """
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        def J(theta, X_b, y):
            """The MSE loss function"""
            try:
                return np.sum((y - X_b.dot(theta)) ** 2) / len(y)
            except:
                return float('inf')

        def dJ(theta, X_b, y):
            """Gradient of the loss function"""
            return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(y)

        def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
            """
            :param X_b: features with a leading column of ones
            :param y: labels
            :param initial_theta: initial theta
            :param eta: learning rate
            :param n_iters: maximum number of iterations
            :param epsilon: stop when the loss changes by less than this
            :return: theta
            """
            theta = initial_theta
            cur_iter = 0
            while cur_iter < n_iters:
                gradient = dJ(theta, X_b, y)
                last_theta = theta
                theta = theta - eta * gradient
                if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
                    break
                cur_iter += 1
            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.zeros(X_b.shape[1])
        self._theta = gradient_descent(X_b, y_train, initial_theta, eta, n_iters)
        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def predict(self, X_predict):
        """Return the vector of predictions for the samples in X_predict"""
        assert self.intercept_ is not None and self.coef_ is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == len(self.coef_), \
            "the feature number of X_predict must be equal to X_train"
        X_b = np.hstack([np.ones((len(X_predict), 1)), X_predict])
        return X_b.dot(self._theta)

    def score(self, X_test, y_test):
        """Evaluate the model on X_test, y_test using R-squared"""
        y_predict = self.predict(X_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return "LR()"
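As a sanity check that gradient descent converges to the same solution as the closed-form normal equation, here is a compact, self-contained comparison on synthetic noise-free data (it re-implements the two update rules directly rather than using the class above):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_theta = np.array([2.0, -1.0, 0.5, 3.0])  # intercept + 3 weights
X_b = np.hstack([np.ones((len(X), 1)), X])
y = X_b.dot(true_theta)

# Closed-form solution (normal equation).
theta_ne = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)

# Plain batch gradient descent on the same MSE loss.
theta_gd = np.zeros(X_b.shape[1])
for _ in range(5000):
    grad = X_b.T.dot(X_b.dot(theta_gd) - y) * 2.0 / len(y)
    theta_gd -= 0.05 * grad

print(np.allclose(theta_ne, theta_gd, atol=1e-3))  # True
```

On noise-free data both recover the generating parameters; with real data they still agree, but gradient descent needs standardized features (as done above) to converge at a usable learning rate.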
4. Running and saving
LR = LinearRegression().fit_gd(train_x, train_y)
print(LR.score(train_x, train_y))
result = LR.predict(test_x)
# Save the result (index=False keeps the row index out of the submission file)
sampleSubmission = pd.read_csv(path + 'sampleSubmission.csv', engine='python', encoding='gbk')
sampleSubmission['value'] = result
sampleSubmission.to_csv(path + 'result.csv', index=False)
The final result:
The R-squared score on the training set is 0.8549 (85.49%).