TensorFlow 2.0 Advanced Quickstart Tutorial

Colab notebook link

Translated from: the official TensorFlow tutorial

In this advanced tutorial, we will build the model with the Keras model subclassing API.

First, import the required libraries:

from __future__ import absolute_import, division, print_function

import tensorflow_datasets as tfds
import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

Here we use tensorflow_datasets, which ships with many common datasets ready to load.
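For example, you can list the available dataset builders and inspect MNIST's metadata before downloading anything (a small sketch, not part of the original tutorial; the exact output depends on your tensorflow_datasets version):

# List a few of the datasets bundled with tensorflow_datasets,
# then read the MNIST metadata without loading any data.
print(tfds.list_builders()[:5])

mnist_builder = tfds.builder('mnist')
print(mnist_builder.info.features)                      # image: (28, 28, 1) uint8, label: 10 classes
print(mnist_builder.info.splits['train'].num_examples)  # 60000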

Next, load the MNIST dataset and convert the integer pixel values to floats:

dataset, info = tfds.load('mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = dataset['train'], dataset['test']

def convert_types(image, label):
  image = tf.cast(image, tf.float32)
  image /= 255
  return image, label

mnist_train = mnist_train.map(convert_types).shuffle(10000).batch(32)
mnist_test = mnist_test.map(convert_types).batch(32)

map applies convert_types to every element of the dataset, shuffle shuffles the examples (here with a buffer of 10,000), and batch groups the elements into batches of 32.
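The same pattern can be tried on a tiny in-memory dataset to see what each step does (a hypothetical toy example, not part of the tutorial):

# Toy tf.data pipeline: map, shuffle and batch six integers.
toy = tf.data.Dataset.from_tensor_slices(tf.range(6))
toy = toy.map(lambda x: x * 10).shuffle(6).batch(2)
for batch in toy:
  print(batch.numpy())  # e.g. [50 10], [ 0 40], [30 20] -- order varies per run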

Next, build the model with tf.keras:

class MyModel(Model):
  def __init__(self):
    super(MyModel, self).__init__()
    self.conv1 = Conv2D(32, 3, activation='relu')
    self.flatten = Flatten()
    self.d1 = Dense(128, activation='relu')
    self.d2 = Dense(10, activation='softmax')

  def call(self, x):
    x = self.conv1(x)
    x = self.flatten(x)
    x = self.d1(x)
    return self.d2(x)
  
model = MyModel()
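Because a subclassed model creates its weights lazily, it can help to run one dummy batch through it and print a summary as a sanity check (optional, not part of the original tutorial):

# Build the weights by calling the model once on a dummy MNIST-shaped batch.
_ = model(tf.zeros([1, 28, 28, 1]))
model.summary()  # Conv2D -> Flatten -> Dense(128) -> Dense(10)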

Then define the optimizer and the loss function:

loss_object = tf.keras.losses.SparseCategoricalCrossentropy()

optimizer = tf.keras.optimizers.Adam()
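SparseCategoricalCrossentropy expects integer class labels, and because the model ends in a softmax the default from_logits=False is what we want. A quick hypothetical check:

# The label is an integer class index; the prediction is a probability vector.
example_loss = loss_object([1], [[0.05, 0.90, 0.05]])
print(example_loss.numpy())  # ~0.105, i.e. -log(0.90)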

Select metrics to measure the model's loss and accuracy. These metrics accumulate the values computed over the course of training.

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
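"Accumulate" here means each metric keeps a running aggregate of every value passed to it until it is reset. A standalone sketch with a Mean metric (not part of the tutorial):

# A Mean metric keeps a running average of all values passed to it.
m = tf.keras.metrics.Mean()
m(2.0)
m(4.0)
print(m.result().numpy())  # 3.0 -- mean of everything seen so far
m.reset_states()           # clear the accumulated state
print(m.result().numpy())  # 0.0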

Use tf.GradientTape to train the model. The @tf.function decorator compiles train_step into a TensorFlow graph, so it runs faster than plain eager Python:

@tf.function
def train_step(image, label):
  with tf.GradientTape() as tape:
    predictions = model(image)
    loss = loss_object(label, predictions)
  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
  
  train_loss(loss)
  train_accuracy(label, predictions)
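Outside of the model, tf.GradientTape simply records operations on watched tensors so that gradients can be computed afterwards; a minimal sketch:

# Record y = x * x on the tape, then differentiate: dy/dx = 2x = 6 at x = 3.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
  y = x * x
print(tape.gradient(y, x).numpy())  # 6.0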

The test function:

@tf.function
def test_step(image, label):
  predictions = model(image)
  t_loss = loss_object(label, predictions)
  
  test_loss(t_loss)
  test_accuracy(label, predictions)

Now train the model for 5 epochs:

EPOCHS = 5

for epoch in range(EPOCHS):
  for image, label in mnist_train:
    train_step(image, label)
  
  for test_image, test_label in mnist_test:
    test_step(test_image, test_label)
  
  template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
  print(template.format(epoch + 1,
                        train_loss.result(),
                        train_accuracy.result() * 100,
                        test_loss.result(),
                        test_accuracy.result() * 100))
Epoch 1, Loss: 0.14260324835777283, Accuracy: 95.77166748046875, Test Loss: 0.05924011021852493, Test Accuracy: 98.04999542236328
Epoch 2, Loss: 0.09455230832099915, Accuracy: 97.16999816894531, Test Loss: 0.0570998452603817, Test Accuracy: 98.15999603271484
Epoch 3, Loss: 0.07151239365339279, Accuracy: 97.84610748291016, Test Loss: 0.0601537711918354, Test Accuracy: 98.10333251953125
Epoch 4, Loss: 0.05761364847421646, Accuracy: 98.25416564941406, Test Loss: 0.060210321098566055, Test Accuracy: 98.15750122070312
Epoch 5, Loss: 0.04829595237970352, Accuracy: 98.52733612060547, Test Loss: 0.06159723922610283, Test Accuracy: 98.18199920654297

As you can see, the model reaches about 98% accuracy on the test set.
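Note that because the metric objects are never reset, each printed value is a running aggregate over all epochs so far rather than a per-epoch number. If you want per-epoch metrics, one option (a small modification, not in the original translation) is to reset the metrics at the start of every epoch:

# Optional: call this at the top of the epoch loop for per-epoch metrics.
for metric in (train_loss, train_accuracy, test_loss, test_accuracy):
  metric.reset_states()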
