TensorFlow Self-Study Notes - Based on the Official Documentation: 01 helloworld


TensorFlow

Deep learning has become a fundamental algorithm in more and more fields. Four years ago, as an undergraduate, I got to know Python and began my learning journey; along the way, learning Python was gradually replaced by learning its packages. Two years ago, when I entered graduate school, my advisor asked what research direction I wanted to pursue. I said image processing, but my advisor works on measurement, so studying deep learning became a hobby I could not speak of. Two years passed, mostly spent on games, while deep learning reached its zenith across every major field; the early doubts about it have long since settled, and I can only trail along behind, plodding forward on my own. Only in graduate school did I realize that research has gradually moved downstream: its main force is no longer the university campus but the technology companies.
The hype around deep learning is now gradually cooling, and the next five to ten years will be a competition in hardware: whoever computes faster and builds stronger chips will take the lead. We are witnessing an era in which virtual reality, the Internet of Things, and 5G are no longer just dreams, though all of them still need markets and policy to drive them forward. We are also witnessing a small decline on the algorithm side, and can only pin our hopes on progress in the fundamental sciences.
For studying deep learning I will use the TensorFlow framework, for no particular reason other than that it is the framework I encountered first; I know little about the others. My study will start from the official documentation: every program here is a reproduction of the official documentation, with no original work of my own, which is why the series is titled "Self-Study Notes".

Reproducing the Code

import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Scale pixel values from [0, 255] to [0.0, 1.0]
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension: (28, 28) -> (28, 28, 1), and cast to float32
x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")

# Build the input pipelines: shuffle the training set, batch both sets
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)


class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')  # 32 filters, 3x3 kernel
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')      # one output per digit class

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)


model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()

optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')


@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        # training=True only matters for layers such as Dropout or BatchNorm,
        # but it matches the official tutorial
        predictions = model(images, training=True)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)
    

@tf.function
def test_step(images, labels):
    predictions = model(images, training=False)
    t_loss = loss_object(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)


EPOCHS = 5
for epoch in range(EPOCHS):
    # Reset the metrics at the start of the next epoch
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()

    for images, labels in train_ds:
        train_step(images, labels)

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))
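
A quick note on the @tf.function decorator applied to train_step and test_step above: on the first call, TensorFlow traces the Python function into a graph, and subsequent calls execute that graph instead of running eagerly, which is usually faster. A minimal standalone illustration (my own sketch, not part of the tutorial):

import tensorflow as tf

@tf.function
def add_one(x):
    # Traced into a graph on the first call; later calls reuse the graph.
    return x + 1

print(add_one(tf.constant([1, 2, 3])))  # tf.Tensor([2 3 4], shape=(3,), dtype=int32)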

Results:

Epoch 1, Loss: 0.13611699640750885, Accuracy: 95.91000366210938, Test Loss: 0.05991847440600395, Test Accuracy: 98.05999755859375
Epoch 2, Loss: 0.04061059653759003, Accuracy: 98.7550048828125, Test Loss: 0.05604378506541252, Test Accuracy: 98.18000030517578
Epoch 3, Loss: 0.020949935540556908, Accuracy: 99.27999877929688, Test Loss: 0.05019310861825943, Test Accuracy: 98.38999938964844
Epoch 4, Loss: 0.011376580223441124, Accuracy: 99.62999725341797, Test Loss: 0.04977274313569069, Test Accuracy: 98.58999633789062
Epoch 5, Loss: 0.009102041833102703, Accuracy: 99.6933364868164, Test Loss: 0.05662848800420761, Test Accuracy: 98.5999984741211

After the fifth epoch, the test accuracy reached about 98.6%.
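
As a quick sanity check, a minimal inference sketch (my own addition, assuming the training code above has already run) could look like the following; it takes one batch from test_ds and compares the model's predictions with the true labels:

import numpy as np

# Take one batch of test images and compare predictions with labels.
for images, labels in test_ds.take(1):
    probabilities = model(images, training=False)        # shape (32, 10), softmax outputs
    predicted_digits = np.argmax(probabilities, axis=1)  # most likely class per image
    print("predicted:", predicted_digits[:10])
    print("actual:   ", labels.numpy()[:10])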
