Project Source
This project is lifted, free of charge, from the Li Hongyi Machine Learning camp hosted on Baidu PaddlePaddle. With GPUs so hard to come by, the course offers 10 hours of free GPU compute per day: a V100 with 16 GB of memory! There is also an exclusive bonus video playlist for Pokémon trainers. A rare opportunity, so come grab it for free.
AI Studio project link.
Project Description
- The data for this assignment are observation records downloaded from the air quality monitoring network of Taiwan's Environmental Protection Administration, Executive Yuan.
- The goal is to implement linear regression and predict PM2.5 values with it.
Dataset
- The assignment uses observation records from the Fengyuan station, split into a train set and a test set. The train set contains all data from the first 20 days of each month; the test set is sampled from the remaining Fengyuan data.
- train.csv: complete data for the first 20 days of each month.
- test.csv: from the remaining data, consecutive 10-hour blocks are sampled. The observations of the first 9 hours serve as the features, and the PM2.5 value of the 10th hour is the answer. 240 non-overlapping test records are drawn; predict PM2.5 for each of these 240 records from its features.
- The data contain 18 observed quantities: AMB_TEMP, CH4, CO, NMHC, NO, NO2, NOx, O3, PM10, PM2.5, RAINFALL, RH, SO2, THC, WD_HR, WIND_DIREC, WIND_SPEED, WS_HR.
Requirements
- Implement linear regression by hand; the only allowed method is gradient descent.
- numpy.linalg.lstsq is forbidden.
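Before touching the real dataset, the requirement above (linear regression fitted purely by gradient descent, no `numpy.linalg.lstsq`) can be sketched on synthetic data. This is a minimal illustration, not part of the assignment; all names and values are made up:

```python
import numpy as np

# Synthetic regression problem: y = X @ true_w + small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([[1.5], [-2.0], [0.5]])
y = X @ true_w + 0.01 * rng.normal(size=(200, 1))

# Plain gradient descent on the mean squared error.
w = np.zeros((3, 1))
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of MSE w.r.t. w
    w -= lr * grad

print(np.round(w.ravel(), 2))
```

With enough iterations the weights approach the least-squares solution, which here is close to `true_w` up to the noise level.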
Data Preparation
None
Environment Setup / Installation
!pip install --upgrade pandas
Looking in indexes: https://mirror.baidu.com/pypi/simple/
Collecting pandas
  Downloading pandas-1.2.3-cp37-cp37m-manylinux1_x86_64.whl (9.9MB)
Requirement already satisfied, skipping upgrade: pytz>=2017.3 (2019.3)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.7.3 (2.8.0)
Collecting numpy>=1.16.5 (from pandas)
  Downloading numpy-1.20.1-cp37-cp37m-manylinux2010_x86_64.whl (15.3MB)
Requirement already satisfied, skipping upgrade: six>=1.5 (1.15.0)
# Now it's your turn!
!pip list
import os
import pandas as pd
import numpy as np
data=pd.read_csv("./work/hw1_data/train.csv", encoding="big5")
data.head()
data.info()
data.describe()
# Drop the first three columns (date, station, item name)
data = data.iloc[:, 3:]
# 'NR' (no rainfall) becomes 0
data[data == 'NR'] = 0
numpy_data = data.to_numpy()
data.head()
data.info()
# RangeIndex: 4320 entries, 0 to 4319; 24 columns (hours), 18 features per day
# 4320 rows = 12 months * 20 days * 18 features; each month spans 20 * 24 = 480 hours
# A 9-hour sliding window gives 480 - 9 = 471 samples per month, 471 * 12 per year
# Each sample has 9 * 18 features
month_data = {}
for month in range(12):
    # One month's data: 18 features x 480 hours
    sample = np.empty([18, 480])
    for day in range(20):
        # Each day contributes 18 features x 24 hours
        sample[:, day*24:(day+1)*24] = numpy_data[18*(20*month+day): 18*(20*month+day+1), :]
    month_data[month] = sample
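The sample count reasoned out above can be checked with a couple of lines (purely a sanity check, nothing new):

```python
# Each month has 20 days * 24 h = 480 hours; a 9-hour input window
# sliding one hour at a time leaves 480 - 9 = 471 windows per month.
hours_per_month = 20 * 24
windows_per_month = hours_per_month - 9
print(hours_per_month, windows_per_month, windows_per_month * 12)
# → 480 471 5652
```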
# Features: 12 months * 471 samples, each 18 features * 9 hours
x = np.empty([12*471, 18*9], dtype=float)
# Labels: PM2.5 of the 10th hour
y = np.empty([12*471, 1], dtype=float)
for month in range(12):
    for day in range(20):
        for hour in range(24):
            # On the last day, skip windows that would run past the month
            if day == 19 and hour > 14:
                continue
            # Flatten the 18 features over 9 hours into one row
            x[month*471+day*24+hour, :] = month_data[month][:, day*24+hour: day*24+hour+9].reshape(1, -1)
            # PM2.5 (row 9) of the following hour
            y[month*471+day*24+hour, 0] = month_data[month][9, day*24+hour+9]
print(x)
print(y)
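The triple loop above can also be expressed with NumPy's sliding-window helper. A sketch on a dummy stand-in for one month's 18 x 480 block (assumes NumPy >= 1.20, which provides `sliding_window_view`; names here are illustrative):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Dummy stand-in for one month's 18 features x 480 hours.
rng = np.random.default_rng(0)
month = rng.normal(size=(18, 480))

# All 9-hour windows along the time axis: shape (18, 472, 9);
# keep the first 471 so every window has a 10th hour as its label.
windows = sliding_window_view(month, 9, axis=1)[:, :471, :]
x_month = windows.transpose(1, 0, 2).reshape(471, 18 * 9)
y_month = month[9, 9:480]  # row 9 is PM2.5; label is hour t+9

print(x_month.shape, y_month.shape)
# → (471, 162) (471,)
```

The transpose/reshape flattens each window feature-major then hour, matching the `.reshape(1, -1)` ordering used in the loop.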
# Normalization (z-score)
mean_x = np.mean(x, axis = 0)  # 18 * 9
std_x = np.std(x, axis = 0)    # 18 * 9
for i in range(len(x)):            # 12 * 471
    for j in range(len(x[0])):     # 18 * 9
        if std_x[j] != 0:
            x[i][j] = (x[i][j] - mean_x[j]) / std_x[j]
x
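The element-wise loop works, but the same z-score normalization can be done in one vectorized step; a sketch on dummy data, guarding zero-variance columns with `np.where` (shapes and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x_demo = rng.normal(size=(100, 5))
x_demo[:, 2] = 7.0  # a constant (zero-variance) column

mean_demo = x_demo.mean(axis=0)
std_demo = x_demo.std(axis=0)
# Replace zero std with 1 so constant columns end up at 0 after centering.
x_norm = (x_demo - mean_demo) / np.where(std_demo == 0, 1.0, std_demo)

print(x_norm.mean(axis=0).round(6))
```

Every column now has mean 0; the non-constant columns have standard deviation 1.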
import math
# 80/20 train/validation split
x_train_set = x[: math.floor(len(x) * 0.8), :]
y_train_set = y[: math.floor(len(y) * 0.8), :]
x_validation = x[math.floor(len(x) * 0.8): , :]
y_validation = y[math.floor(len(y) * 0.8): , :]
print(x_train_set)
print(y_train_set)
print(x_validation)
print(y_validation)
print(len(x_train_set))
print(len(y_train_set))
print(len(x_validation))
print(len(y_validation))
Training
# Extra dimension for the constant (bias) term
dim = 18 * 9 + 1
w = np.zeros([dim, 1])
x = np.concatenate((np.ones([12 * 471, 1]), x), axis = 1).astype(float)
learning_rate = 100
iter_time = 1000
adagrad = np.zeros([dim, 1])
# Keep the denominator away from zero
eps = 0.0000000001
for t in range(iter_time):
    # Root mean squared error
    loss = np.sqrt(np.sum(np.power(np.dot(x, w) - y, 2)) / 471 / 12)  # rmse
    # Report every 100 iterations
    if t % 100 == 0:
        print(str(t) + ":" + str(loss))
    # Gradient of the squared error, shape dim * 1
    gradient = 2 * np.dot(x.transpose(), np.dot(x, w) - y)
    adagrad += gradient ** 2
    w = w - learning_rate * gradient / np.sqrt(adagrad + eps)
np.save('weight.npy', w)
w
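The update above is Adagrad: each weight's effective step size is the base learning rate divided by the root of its accumulated squared gradients, which is why a seemingly huge `learning_rate` of 100 stays stable. A toy sketch of the same update rule on a 1-D quadratic (illustrative values, unrelated to the assignment data):

```python
import numpy as np

# Minimize f(w) = (w - 3)^2 with the Adagrad rule used above.
w_toy = 0.0
accum = 0.0
lr, eps = 100, 1e-10
for _ in range(1000):
    grad = 2 * (w_toy - 3)
    accum += grad ** 2                          # accumulate squared gradients
    w_toy -= lr * grad / np.sqrt(accum + eps)   # per-step shrinking learning rate

print(round(w_toy, 3))
```

The first step overshoots wildly, but the growing accumulator quickly tames the effective learning rate and the iterate settles at the minimum.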
Testing
test_data=pd.read_csv("./work/hw1_data/test.csv",header=None, encoding="big5")
test_data.head()
# Drop the first two columns (record id and item name)
test_data = test_data.iloc[:, 2:]
test_data.head()
# 'NR' becomes 0 here as well
test_data[test_data == 'NR'] = 0
test_data = test_data.to_numpy()
# 240 records, each with 18 features * 9 hours
test_x = np.empty([240, 18*9], dtype=float)
for i in range(240):
    test_x[i, :] = test_data[18 * i: 18 * (i + 1), :].reshape(1, -1)
# Normalize with the training statistics
for i in range(len(test_x)):
    for j in range(len(test_x[0])):
        if std_x[j] != 0:
            test_x[i][j] = (test_x[i][j] - mean_x[j]) / std_x[j]
test_x = np.concatenate((np.ones([240, 1]), test_x), axis = 1).astype(float)
test_x
Prediction
w=np.load("weight.npy")
ans_y=np.dot(test_x,w)
ans_y
Save Results
import csv
with open("submit.csv", mode="w", newline='') as submit_file:
    csv_writer = csv.writer(submit_file)
    header = ['id', 'value']
    csv_writer.writerow(header)
    for i in range(240):
        row = ["id_" + str(i), ans_y[i][0]]
        csv_writer.writerow(row)
        print(row)