TensorFlow 2.0 Beginner Tutorial 11: Building a Multilayer Perceptron (MLP) Model

MLP Model Structure

[Figure: MLP model structure]
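The network built in this tutorial flattens each 28×28 image into a 784-dimensional vector, passes it through a fully connected hidden layer of 100 ReLU units, and maps it to 10 class probabilities with a softmax. As a minimal sketch of that structure using the Sequential API (the tutorial itself uses the subclassing API below):

# A Sequential-API sketch of the same structure built later with the subclassing API:
# Flatten -> Dense(100, relu) -> Dense(10, softmax)
import tensorflow as tf

mlp_sketch = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),        # 28x28 image -> 784-dim vector
    tf.keras.layers.Dense(100, activation=tf.nn.relu),    # hidden layer, 100 units
    tf.keras.layers.Dense(10, activation=tf.nn.softmax),  # output layer, 10 class probabilities
])
mlp_sketch.summary()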

Implementing fashion_mnist image classification with an MLP network

1. Load the dataset with tf.keras.datasets

2. Build the model

3. Configure the model

4. Train the model

5. Evaluate the model

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

1. Data Acquisition and Preprocessing: tf.keras.datasets

(train_data, train_label), (test_data, test_label) = tf.keras.datasets.fashion_mnist.load_data()
train_label
array([9, 0, 0, ..., 3, 0, 5], dtype=uint8)

Normalize the image data

train_data = train_data.astype(np.float32) / 255.0   # [60000, 28, 28]
test_data = test_data.astype(np.float32) / 255.0     # [10000, 28, 28]
train_label = train_label.astype(np.int32)    # [60000]
test_label = test_label.astype(np.int32)      # [10000]

Check the data shapes

train_data.shape
(60000, 28, 28)
train_label.shape
(60000,)
train_label[0]
9
plt.imshow(train_data[0])
<matplotlib.image.AxesImage at 0x29f6580fef0>

[Figure: the first training image, train_data[0], displayed with plt.imshow]

2. Building the Model

class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()    # Flatten collapses all dimensions except the first (batch_size)
        self.dense1 = tf.keras.layers.Dense(units=100, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(units=10)

    def call(self, inputs):         # [batch_size, 28, 28]
        x = self.flatten(inputs)    # [batch_size, 784]
        x = self.dense1(x)          # [batch_size, 100]
        x = self.dense2(x)          # [batch_size, 10]
        output = tf.nn.softmax(x)
        return output
model = MLP()
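As an optional sanity check (not part of the original notebook), calling the freshly built model on a dummy batch confirms the output shape is [batch_size, 10]:

dummy = tf.zeros([2, 28, 28])    # a fake batch of two blank images
print(model(dummy).shape)        # expected: (2, 10)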

3. Configuring the Model

The difference between categorical_crossentropy and sparse_categorical_crossentropy (a minimal comparison is sketched after this list):

  • categorical_crossentropy: expects one-hot encoded labels, e.g. [0, 0, 1], [1, 0, 0], [0, 1, 0]
  • sparse_categorical_crossentropy: expects integer-encoded labels, e.g. 2, 0, 1
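As a minimal illustration (the labels and probabilities below are made up for the example), the two losses return the same per-sample values when the label encodings correspond:

# Integer labels vs. their one-hot counterparts; probs is a fake batch of predictions
labels_sparse = np.array([2, 0, 1])
labels_onehot = tf.one_hot(labels_sparse, depth=3)    # [0,0,1], [1,0,0], [0,1,0]
probs = tf.constant([[0.1, 0.2, 0.7],
                     [0.8, 0.1, 0.1],
                     [0.2, 0.6, 0.2]])
print(tf.keras.losses.sparse_categorical_crossentropy(labels_sparse, probs).numpy())
print(tf.keras.losses.categorical_crossentropy(labels_onehot, probs).numpy())
# Both print the same per-sample cross-entropy values
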
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=[tf.keras.metrics.sparse_categorical_accuracy])
# model.compile(optimizer="adam",
#               loss="sparse_categorical_crossentropy",
#               metrics=['accuracy'])

4. Training the Model

%%time
model.fit(train_data, train_label, epochs=10, batch_size=256, verbose=2, validation_split=0.1)
Train on 54000 samples, validate on 6000 samples
Epoch 1/10
54000/54000 - 1s - loss: 0.5759 - sparse_categorical_accuracy: 0.8003 - val_loss: 0.4204 - val_sparse_categorical_accuracy: 0.8468
Epoch 2/10
54000/54000 - 1s - loss: 0.3995 - sparse_categorical_accuracy: 0.8559 - val_loss: 0.4237 - val_sparse_categorical_accuracy: 0.8425
Epoch 3/10
54000/54000 - 1s - loss: 0.3585 - sparse_categorical_accuracy: 0.8712 - val_loss: 0.3900 - val_sparse_categorical_accuracy: 0.8528
Epoch 4/10
54000/54000 - 1s - loss: 0.3381 - sparse_categorical_accuracy: 0.8773 - val_loss: 0.3687 - val_sparse_categorical_accuracy: 0.8682
Epoch 5/10
54000/54000 - 1s - loss: 0.3271 - sparse_categorical_accuracy: 0.8803 - val_loss: 0.3496 - val_sparse_categorical_accuracy: 0.8750
Epoch 6/10
54000/54000 - 1s - loss: 0.3085 - sparse_categorical_accuracy: 0.8855 - val_loss: 0.3600 - val_sparse_categorical_accuracy: 0.8702
Epoch 7/10
54000/54000 - 1s - loss: 0.2965 - sparse_categorical_accuracy: 0.8911 - val_loss: 0.3409 - val_sparse_categorical_accuracy: 0.8772
Epoch 8/10
54000/54000 - 1s - loss: 0.2860 - sparse_categorical_accuracy: 0.8948 - val_loss: 0.3986 - val_sparse_categorical_accuracy: 0.8530
Epoch 9/10
54000/54000 - 1s - loss: 0.2855 - sparse_categorical_accuracy: 0.8940 - val_loss: 0.3443 - val_sparse_categorical_accuracy: 0.8758
Epoch 10/10
54000/54000 - 1s - loss: 0.2793 - sparse_categorical_accuracy: 0.8961 - val_loss: 0.3506 - val_sparse_categorical_accuracy: 0.8770
Wall time: 5.65 s
<tensorflow.python.keras.callbacks.History at 0x29f6440edd8>
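model.fit returns a History object whose history attribute maps each metric name to its per-epoch values. A minimal sketch of plotting the curves, assuming the fit call above is changed to assign its return value to a variable named history (a name introduced here for illustration):

history = model.fit(train_data, train_label, epochs=10, batch_size=256,
                    verbose=2, validation_split=0.1)
# Plot training vs. validation accuracy per epoch
plt.plot(history.history['sparse_categorical_accuracy'], label='train accuracy')
plt.plot(history.history['val_sparse_categorical_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()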

5. Evaluating the Model

model.evaluate(test_data, test_label, verbose=2)
10000/1 - 0s - loss: 0.2453 - sparse_categorical_accuracy: 0.8684
[0.36767163943052295, 0.8684]
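Besides the aggregate loss and accuracy, model.predict returns the per-class probabilities for each image; the predicted class is the argmax over the last axis. A minimal sketch (the variable names probs and preds are introduced here for illustration):

probs = model.predict(test_data)     # class probabilities, shape [10000, 10]
preds = np.argmax(probs, axis=1)     # predicted class index for each test image
print(preds[:10])                    # compare the first few predictions ...
print(test_label[:10])               # ... against the true labels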

Processing the Data with tf.data

(train_data, train_label), (test_data, test_label) = tf.keras.datasets.fashion_mnist.load_data()
def convert(img, label):
    # Cast images to float32 and scale pixel values to [0, 1]; cast labels to int32
    img = tf.cast(img, tf.float32) / 255.0
    label = tf.cast(label, tf.int32)
    return img, label
train_dataset = tf.data.Dataset.from_tensor_slices((train_data, train_label))
train_dataset = train_dataset.map(convert, num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_dataset = train_dataset.shuffle(buffer_size=1024)    # shuffle the training samples
train_dataset = train_dataset.batch(256)                   # group samples into batches of 256
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)    # overlap preprocessing and training
test_dataset = tf.data.Dataset.from_tensor_slices((test_data, test_label))
test_dataset = test_dataset.map(convert, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.shuffle(buffer_size=1024)
test_dataset = test_dataset.batch(256)
test_dataset = test_dataset.prefetch(tf.data.experimental.AUTOTUNE)
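As an optional check (not in the original notebook), pulling one batch with take(1) confirms the pipeline yields batches of the expected shape:

for imgs, labels in train_dataset.take(1):
    print(imgs.shape, labels.shape)    # expected: (256, 28, 28) (256,)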
class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()    # Flatten collapses all dimensions except the first (batch_size)
        self.dense1 = tf.keras.layers.Dense(units=100, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(units=10)

    def call(self, inputs):         # [batch_size, 28, 28]
        x = self.flatten(inputs)    # [batch_size, 784]
        x = self.dense1(x)          # [batch_size, 100]
        x = self.dense2(x)          # [batch_size, 10]
        output = tf.nn.softmax(x)
        return output
model2 = MLP()
model2.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
               loss=tf.keras.losses.sparse_categorical_crossentropy,
               metrics=[tf.keras.metrics.sparse_categorical_accuracy])
%%time
model2.fit(train_dataset, epochs=10, verbose=2, validation_data=test_dataset)
Epoch 1/10
235/235 - 3s - loss: 0.5467 - sparse_categorical_accuracy: 0.8081 - val_loss: 0.0000e+00 - val_sparse_categorical_accuracy: 0.0000e+00
Epoch 2/10
235/235 - 3s - loss: 0.3933 - sparse_categorical_accuracy: 0.8587 - val_loss: 0.4199 - val_sparse_categorical_accuracy: 0.8432
Epoch 3/10
235/235 - 3s - loss: 0.3553 - sparse_categorical_accuracy: 0.8709 - val_loss: 0.3952 - val_sparse_categorical_accuracy: 0.8590
Epoch 4/10
235/235 - 3s - loss: 0.3375 - sparse_categorical_accuracy: 0.8761 - val_loss: 0.4057 - val_sparse_categorical_accuracy: 0.8563
Epoch 5/10
235/235 - 3s - loss: 0.3236 - sparse_categorical_accuracy: 0.8813 - val_loss: 0.3860 - val_sparse_categorical_accuracy: 0.8620
Epoch 6/10
235/235 - 3s - loss: 0.3148 - sparse_categorical_accuracy: 0.8831 - val_loss: 0.3924 - val_sparse_categorical_accuracy: 0.8586
Epoch 7/10
235/235 - 3s - loss: 0.3080 - sparse_categorical_accuracy: 0.8862 - val_loss: 0.3717 - val_sparse_categorical_accuracy: 0.8679
Epoch 8/10
235/235 - 3s - loss: 0.2953 - sparse_categorical_accuracy: 0.8907 - val_loss: 0.3894 - val_sparse_categorical_accuracy: 0.8640
Epoch 9/10
235/235 - 3s - loss: 0.2963 - sparse_categorical_accuracy: 0.8906 - val_loss: 0.4110 - val_sparse_categorical_accuracy: 0.8585
Epoch 10/10
235/235 - 3s - loss: 0.2831 - sparse_categorical_accuracy: 0.8942 - val_loss: 0.3948 - val_sparse_categorical_accuracy: 0.8654
Wall time: 32.4 s
<tensorflow.python.keras.callbacks.History at 0x29f5f0e4a20>
model2.evaluate(test_dataset)
79/79 [==============================] - 1s 7ms/step - loss: 0.4055 - sparse_categorical_accuracy: 0.8629
[0.40553383642359625, 0.8629]