A Code Framework for Kaggle Regression Competitions Based on MXNet

I. Overview

Section 3.16 of the book (d2l-zh, i.e. Dive into Deep Learning) can be extended into a reusable framework for Kaggle competitions. The competition used here is House Prices: Advanced Regression Techniques, a regression problem.

II. The General Deep Learning Workflow

Following Li Hang's summary of the machine learning workflow in Statistical Learning Methods, the pipeline is broken into data, model, strategy, algorithm, training, validation, and prediction.

1. Data

1.1 Read Data

import d2lzh as d2l
import pandas as pd
from mxnet import autograd, gluon, nd
from mxnet.gluon import data as gdata, loss as gloss, nn

# read data
train_data = pd.read_csv('./d2l-zh-1.1/data/kaggle_house_pred_train.csv')
test_data = pd.read_csv('./d2l-zh-1.1/data/kaggle_house_pred_test.csv')
# print(train_data.shape)
# print(train_data.iloc[0:4, [0, 1, 2, -1, -2, -3]])

1.2 Preprocess Data

# standardize the numeric features to zero mean and unit variance
all_features = pd.concat((train_data.iloc[:, 1:-1], test_data.iloc[:, 1:]))
numeric_features = all_features.dtypes[all_features.dtypes != 'object'].index
all_features[numeric_features] = all_features[numeric_features].apply(
    lambda x: (x - x.mean()) / x.std())
# after standardization each feature has mean 0, so missing values can be filled with 0
all_features[numeric_features] = all_features[numeric_features].fillna(0)

# convert discrete features to dummy (one-hot) variables; dummy_na=True adds an indicator for missing values
all_features = pd.get_dummies(all_features, dummy_na=True)

# split back into training and test sets as NDArrays
n_train = train_data.shape[0]
train_features = nd.array(all_features[:n_train].values)
test_features = nd.array(all_features[n_train:].values)
train_labels = nd.array(train_data['SalePrice'].values).reshape((-1, 1))
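
For intuition, the get_dummies step above expands each discrete column into one indicator column per category level, and dummy_na=True adds an extra column flagging missing values. A toy example (the column name is borrowed from the dataset):

toy = pd.DataFrame({'MSZoning': ['RL', 'RM', None]})
print(pd.get_dummies(toy, dummy_na=True))
# three indicator columns: MSZoning_RL, MSZoning_RM, MSZoning_nan,
# with exactly one nonzero entry per row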

1.3 get_k_fold_data

# k-fold cross-validation: return the i-th train/validation split
def get_k_fold_data(k, i, X, y):
    assert k > 1
    fold_size = X.shape[0] // k
    X_train, y_train, X_valid, y_valid = None, None, None, None
    for j in range(k):
        idx = slice(j * fold_size, (j + 1) * fold_size)
        X_part, y_part = X[idx, :], y[idx]
        if j == i:
            X_valid, y_valid = X_part, y_part
        elif X_train is None:
            X_train, y_train = X_part, y_part
        else:
            X_train = nd.concat(X_train, X_part, dim=0)
            y_train = nd.concat(y_train, y_part, dim=0)
    return X_train, y_train, X_valid, y_valid
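
A quick sanity check of the splitting logic (a hypothetical snippet, assuming the features prepared above): fold i becomes the validation set and the remaining k - 1 folds are concatenated into the training set. Note that any trailing rows beyond k * fold_size are silently dropped.

X_tr, y_tr, X_va, y_va = get_k_fold_data(5, 0, train_features, train_labels)
print(X_tr.shape, X_va.shape)  # 4/5 of the rows for training, 1/5 for validation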

2. Model

# a simple MLP: one 256-unit ReLU hidden layer with dropout for regularization
def get_net():
    net = nn.Sequential()
    net.add(nn.Dense(256, activation='relu'),
            nn.Dropout(0.5),
            nn.Dense(1))
    net.initialize()
    return net
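
A quick shape check (a hypothetical snippet, assuming the features prepared above); the first forward pass triggers gluon's deferred shape inference:

net = get_net()
print(net(train_features[:2]).shape)  # (2, 1): one predicted price per example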

3. Strategy

# squared loss: gluon's L2Loss computes (pred - label)^2 / 2
loss = gloss.L2Loss()
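
The competition scores submissions by the RMSE between the logarithms of predicted and actual prices, so the book pairs the L2 loss with a log_rmse metric; train below depends on it:

def log_rmse(net, features, labels):
    # clip predictions to [1, inf) so the logarithm is stable
    clipped_preds = nd.clip(net(features), 1, float('inf'))
    # L2Loss returns (pred - label)^2 / 2, hence the factor of 2 under the sqrt
    rmse = nd.sqrt(2 * loss(clipped_preds.log(), labels.log()).mean())
    return rmse.asscalar()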

4. Algorithm

# the optimization algorithm: Adam with weight decay, e.g.
# trainer = gluon.Trainer(net.collect_params(), 'adam',
#                         {'learning_rate': lr, 'wd': weight_decay})
# (instantiated per fold inside k_fold below)

5. Training

# training loop: one pass over train_iter per epoch, recording log-RMSE
def train(net, train_iter, train_features, train_labels, test_features, test_labels,
          loss, num_epochs, trainer, batch_size):
    train_ls, test_ls = [], []
    for epoch in range(num_epochs):
        for X, y in train_iter:
            with autograd.record():
                l = loss(net(X), y)
            l.backward()
            # step(batch_size) rescales the summed gradient by 1/batch_size
            trainer.step(batch_size)
        train_ls.append(log_rmse(net, train_features, train_labels))
        if test_labels is not None:
            test_ls.append(log_rmse(net, test_features, test_labels))
    return train_ls, test_ls

6. Validation

def k_fold(k, X_train, y_train, num_epochs, learning_rate, weight_decay, batch_size):
    train_l_sum, valid_l_sum = 0.0, 0.0
    for i in range(k):
        # data
        data = get_k_fold_data(k, i, X_train, y_train)
        train_features, train_labels, _, _ = data
        train_iter = gdata.DataLoader(gdata.ArrayDataset(train_features, train_labels), batch_size, shuffle=True)
        # model
        net = get_net()
        # strategy
        loss = gloss.L2Loss()
        # algorithm
        trainer = gluon.Trainer(net.collect_params(), 'adam',
                                {'learning_rate': learning_rate, 'wd': weight_decay})
        # training
        train_ls, valid_ls = train(net, train_iter, *data, loss, num_epochs, trainer, batch_size)
        train_l_sum += train_ls[-1]
        valid_l_sum += valid_ls[-1]
        if i == 0:
            d2l.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'rmse',
                         range(1, num_epochs + 1), valid_ls, ['train', 'valid'])
        print('fold %d, train rmse %f, valid rmse %f' % (i, train_ls[-1], valid_ls[-1]))
    return train_l_sum / k, valid_l_sum / k


# model selection
k, num_epochs, lr, weight_decay, batch_size = 5, 500, 0.01, 512, 64
train_l, valid_l = k_fold(k, train_features, train_labels, num_epochs, lr,
                          weight_decay, batch_size)
print('%d-fold validation: avg train rmse %f, avg valid rmse %f' % (k, train_l, valid_l))

7. Prediction
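
train_and_pred is not defined in this post; below is a sketch adapted from the book's version to the train signature used here. It retrains on the full training set with the hyperparameters chosen by k-fold validation, then writes a Kaggle submission file:

def train_and_pred(train_features, test_features, train_labels, test_data,
                   num_epochs, lr, weight_decay, batch_size):
    net = get_net()
    train_iter = gdata.DataLoader(
        gdata.ArrayDataset(train_features, train_labels), batch_size, shuffle=True)
    trainer = gluon.Trainer(net.collect_params(), 'adam',
                            {'learning_rate': lr, 'wd': weight_decay})
    # no held-out set here, so test_features/test_labels are passed as None
    train_ls, _ = train(net, train_iter, train_features, train_labels,
                        None, None, loss, num_epochs, trainer, batch_size)
    print('train rmse %f' % train_ls[-1])
    # predict on the test set and write the submission file for Kaggle
    preds = net(test_features).asnumpy()
    test_data['SalePrice'] = pd.Series(preds.reshape(1, -1)[0])
    submission = pd.concat([test_data['Id'], test_data['SalePrice']], axis=1)
    submission.to_csv('submission.csv', index=False)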

train_and_pred(train_features, test_features, train_labels, test_data,
               num_epochs, lr, weight_decay, batch_size)