Keras-3 Keras With Otto Group

The Otto classification problem

Here we will build a classifier for the Otto dataset.

  • This post mainly follows 2.3 Introduction to Keras. I think it is a very good Keras tutorial and recommend studying it as well.
  • More details about Otto can be found at otto group.
  • This post focuses on the code; the finer details and basic concepts are not covered in depth.

Let's get started.

As mentioned before, working through a problem consists of three main parts: data preparation, model building, and model optimization.

Importing modules

We encounter a few new modules here.

import numpy as np
import pandas as pd

from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder

from keras.utils import np_utils
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout

from keras.callbacks import EarlyStopping, ModelCheckpoint
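
These imports follow the older standalone Keras 2.x API this post was written against. On a recent TensorFlow/Keras installation the equivalents would look roughly like the sketch below (my assumption; adjust to whatever version you have installed):

import numpy as np
import pandas as pd

from sklearn.preprocessing import StandardScaler, LabelEncoder

# tf.keras equivalents of the imports used in this post
from tensorflow.keras.utils import to_categorical                 # replaces np_utils.to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout    # keras.layers.core no longer exists
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
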
Data preparation

Read in the data. The data can be found at [otto group](https://www.kaggle.com/c/otto-group-product-classification-challenge/data).
train_path = './data/train.csv'
test_path = './data/test.csv'

df = pd.read_csv(train_path)
Take a look at the data. There are 93 features, the last column is the class (target), and the id in the first column is of no use for training.
df.head()
(Output: the first five rows of the DataFrame, 5 rows × 95 columns: an id column, feat_1 through feat_93, and a target column, which reads Class_1 for these rows.)
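
Before going further, it can be useful to see how the rows are spread over the nine classes (a small check, assuming the label column is named target as shown above):

# how many rows each of the nine classes has
print(df['target'].value_counts())
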

Load the data.

  • The id in the first column is useless for training, so we drop it.
  • The train and test files differ slightly (test does not contain the target column).
def load_data(path, train=True):
    df = pd.read_csv(path)
    X = df.values.copy()

    if train:
        # shuffle the rows, then split into features (drop id and target) and labels
        np.random.shuffle(X)
        X, label = X[:, 1:-1].astype(np.float32), X[:, -1]
        return X, label
    else:
        # the test file has no target column: keep the features and the ids
        X, ids = X[:, 1:].astype(np.float32), X[:, 0].astype(str)
        return X, ids
X_train, y_train = load_data(train_path)
X_test, ids = load_data(test_path, train=False)
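
A quick sanity check on what load_data returned (just printing shapes and a few values; not required for training):

# both splits should have 93 feature columns
print(X_train.shape, y_train[:3])
print(X_test.shape, ids[:3])
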

Preprocessing. Normalize the training and test data right away with the same scaler, so we don't forget to do it later.

def preprocess_data(X, scaler=None):
    # fit a new StandardScaler on the first call, then reuse it for the test set
    if not scaler:
        scaler = StandardScaler()
        scaler.fit(X)
    X = scaler.transform(X)
    return X, scaler
X_train, scaler = preprocess_data(X_train)
X_test, _ = preprocess_data(X_test, scaler)
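
If you want to verify the scaling, each feature of the training set should now have roughly zero mean and unit variance (a quick check, not part of the original code):

print(X_train.mean(axis=0)[:3])   # close to 0
print(X_train.std(axis=0)[:3])    # close to 1
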

One-hot encoding

def preprocess_label(labels, encoder=None, categorical=True):
    # map the string labels (Class_1 ... Class_9) to integers 0-8,
    # then optionally expand them into one-hot vectors
    if not encoder:
        encoder = LabelEncoder()
        encoder.fit(labels)
    y = encoder.transform(labels).astype(np.int32)
    if categorical:
        y = np_utils.to_categorical(y)
    return y, encoder
y_train, encoder = preprocess_label(y_train)
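
The fitted encoder keeps the original class names, which is handy when naming the submission columns later (a small check using scikit-learn's classes_ attribute):

# the original label names, in the order the integer codes were assigned
print(encoder.classes_)   # ['Class_1' ... 'Class_9']
print(y_train.shape)      # (n_samples, 9) after one-hot encoding
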

Building the network

dim = X_train.shape[1]
print(dim, 'dims')
print('Building model')

nb_classes = y_train.shape[1]

model = Sequential()

# three fully connected hidden layers, each with ReLU activation and dropout
model.add(Dense(256, input_shape=(dim, )))
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))

# softmax output layer: one probability per class
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
93 dims
Building model
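
To double-check the architecture and parameter counts before compiling, Keras can print a layer-by-layer summary:

model.summary()
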
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
batch_size = 128
epochs = 2

Train the model, saving the best one along the way.

fBestModel = 'best_model.h5'
# stop when the validation accuracy stops improving; note that newer Keras versions
# name this metric 'val_accuracy' rather than 'val_acc'
early_stop = EarlyStopping(monitor='val_acc', patience=5, verbose=1)
best_model = ModelCheckpoint(fBestModel, verbose=0, save_best_only=True)

model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size, verbose=1, validation_split=0.1, callbacks=[best_model, early_stop])
Train on 55690 samples, validate on 6188 samples
Epoch 1/2
55690/55690 [==============================] - 2s 42us/step - loss: 0.5256 - acc: 0.7967 - val_loss: 0.5268 - val_acc: 0.7982
Epoch 2/2
55690/55690 [==============================] - 2s 42us/step - loss: 0.5251 - acc: 0.7991 - val_loss: 0.5256 - val_acc: 0.8017





<keras.callbacks.History at 0x13551adaef0>
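
Because ModelCheckpoint wrote the best weights to best_model.h5, you can also reload that checkpoint before predicting, instead of using the model left in memory (a small sketch using Keras' load_model):

from keras.models import load_model

# restore the checkpointed model written by ModelCheckpoint above
best = load_model('best_model.h5')
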

Predict and save the results. Saving them in the format Kaggle requires and submitting gave a score of around 0.5, reportedly somewhere around the top 50%.

prediction = model.predict(X_test)
num_pre = prediction.shape[0]
# Kaggle expects one probability column per class: Class_1 ... Class_9
columns = ['Class_'+str(i+1) for i in range(9)]

df2 = pd.DataFrame({'id' : range(1,num_pre+1)})
df3 = pd.DataFrame(prediction, columns=columns)

df_pre = pd.concat([df2, df3], axis=1)
df_pre.to_csv('prediction.csv', index=False)
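
A quick check of the submission before uploading; each row of class probabilities should sum to roughly 1, since the output layer is a softmax:

print(df_pre.head())
print(df_pre[columns].sum(axis=1).head())   # approximately 1.0 per row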